US federal law is full of statutes that take something minor or completely legal and add huge punishments because someone used technology. All the way back in 1952, fraud carried worse punishment if someone used a telephone to commit it. In 1986 the Computer Fraud and Abuse Act added even more punishment if someone used a computer.
Fraud is bad, and it should be illegal, but why have different punishments based on what technology someone used?
Laws like this extend beyond fraud, and are often clearly unconstitutional. Take the Professional and Amateur Sports Protection Act of 1992, which made lawful sports gambling illegal too, until it was struck down in Murphy v. National Collegiate Athletic Association in 2018.
>but why have different punishments based on what technology someone used?
So first, as a foundation: I see no reason to pretend that the law is always perfectly thought through and logical, particularly when it comes to crime. And even when a law was well considered for its time, that doesn't mean circumstances haven't changed over the decades while the law remained static.
That said, in principle punishment embodies multiple components, and a major one is deterrence. The deterrence value in turn interplays with factors like barrier to entry, scaling of the potential harm, and the likelihood of getting caught. Use of technology can have a significant impact on all of these. It's significantly more challenging and expensive to prosecute crimes that stretch across many jurisdictions. Technology can also have a multiplier effect, allowing criminal actors to go after far more people, both in raw numbers and in finding the small percentage who are vulnerable. And perceived anonymity and impunity can further increase the number of actors and their activity levels. Technology has also often implied a higher degree of sophistication.
All of that weighs towards a higher level of punishment even as pure game theory. That doesn't mean the present levels are correct or shouldn't be only a part of other aspects of fighting fraud that depressingly frequently get neglected, but it's not irrational to punish more when criminals are generating more damage and working hard to decrease the chance of facing any penalties at all.
> So first as foundation, I see no reason to pretend that the law is always perfectly thought through and logical particularly when it comes to crime.
You will of course never reach perfection, but considering that when a law is applied, a lot of violence (police, jail, ...) gets involved, a politician who does not dedicate his life to making the laws as perfect as humanly possible (with finding an imperfection in the laws treated as a breakthrough as big as the discovery of quantum physics or general relativity) clearly does not deserve to be elected.
>a politician who does not dedicate his life towards making the laws as perfect as humanly possible (...) clearly does not deserve to be elected.
Oh, sweet summer child. Not attempting to make laws "as perfect as humanly possible" is the least of our worries with politicians!
Most of them actively dedicate their lives to the opposite, making the laws as bad as possible: out of ideology, out of being paid by lobbies and monopolies, out of personal interest, and so on.
You have two levers to enforce law. You can get better at catching lawbreakers, or you can punish those who are caught more harshly. There are studies showing that catching a higher percentage of criminals and punishing them in a timely fashion leads to lower crime than punishing the ones you do catch harder. Europe in general has more police officers per capita and higher conviction rates, delivered more promptly. The US, on the other hand, spends more on prisons and has fewer officers. I think this is partially cultural and partially due to how responsibilities and financing are divided between local, state, and federal government in the US.
Fraud via phone or computer is harder to catch. So the US follows its established pattern: instead of boosting law enforcement efforts, it increases punishment.
Europe has a similar problem of over-punishing "crimes with a computer". In many EU countries, there's no punishment for trespassing, but even accessing an open network share that you found on Shodan, looking around out of curiosity, then disconnecting, is punishable with prison time.
The big problem with CFAA isn't particular to CFAA at all; it's that it shares the 2B1.1 loss table with all the other federal criminal statutes, and computers are very good and very fast at running the numbers on that table up. It's a real problem and I'm not pushing back on the idea that something should change about it, but I wouldn't characterize the problem the way you do, as the law singling out crimes involving computers.
Part of the history of CFAA was that it was passed because the state of the law preceding it didn't comfortably criminalize things like malicious hacking and denial of service; you can do those things without tripping over wire fraud.
> it's that it shares the 2B1.1 loss table with all the other federal criminal statutes, and computers are very good and very fast at running the number on that table up.
That's a problem with it, but another big one is that it's inherently ambiguous.
The normal way you know if you're authorized to do something with a computer is that it processes the request. They're perfectly capable of refusing; you get "forbidden" or "access denied" but in that case you're not actually accessing it, you're just being informed that you're not allowed to right now. So for there to be a violation the computer would have to let you do something it isn't supposed to. But how are you supposed to know that then?
On a lot of websites -- like this one -- you go to a page like https://news.ycombinator.com/user?id=<user_id> and you get the user's profile. If you put in your user there then you can see your email address and edit your profile etc. If the server was misconfigured and showing everyone's email address when it isn't supposed to, how is someone supposed to know that? Venmo publishes their users' financial transactions. If you notice that and think it's weird and write a post about it, should the company -- possibly retroactively -- be able to decide that the data you're criticizing them for making public wasn't intended to be, and therefore your accessing it was a crime? If you notice this by accident when it's obvious the data shouldn't be public -- you saw it when you made a typo in the URL -- should there be a law that can put you in jail if you admit to this in the process of making the public aware of the company's mistake, even if your access resulted in no harm to anyone?
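To make the ambiguity concrete, here's a minimal sketch of the point about status codes. The only authorization signal an ordinary client ever sees is the HTTP response; the classification below is this comment's framing of that signal, not anything the CFAA itself defines:

```python
# Sketch: the only authorization signal a normal client receives is the
# HTTP status code. A misconfigured server returns "served" for data it
# was never meant to expose, which is indistinguishable, from the client's
# side, from intentionally public data.

def access_signal(status: int) -> str:
    """Map an HTTP status code to what it tells the client about access."""
    if 200 <= status < 300:
        return "served"      # the server processed the request
    if status in (401, 403):
        return "refused"     # the server said no; nothing was accessed
    if status == 404:
        return "not found"
    return "other"

# A working request and a misconfigured one both yield "served":
assert access_signal(200) == "served"
# An explicit refusal is unambiguous:
assert access_signal(403) == "refused"
```

The problem the comment describes is exactly that nothing in the "served" case tells you whether the operator intended it.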
The wording is too vague and it criminalizes too much. "Malicious hacking" might not always be wire fraud but in other cases it could be misappropriation of trade secrets etc., i.e. whatever the actual act of malice is. The problem with the CFAA is that it's more or less attempting to be the federal computer law against burglary (i.e. unlawful entry with the intent to commit a crime) except that it makes the "unlawful" part too hard to pin down and failed to include the part about intent to commit a crime, which allows it to be applied against people it ought not to.
Legislative overreach that leads to an almost total reliance on prosecutorial discretion is a terrible way to run a society. The moment that federal prosecutors stop being obsessed with 100% conviction rates, the whole weaponized process becomes tyrannical overnight. Regardless of innocence, most people get advised today to take the dramatically reduced plea bargain because of the extortion-tier penalties for most crimes; we barely use trials to establish facts and guilt any more.
So that's actually a big part of the problem. "Unauthorized" means what, that they abstractly don't like what you're doing? It's hard to tell what it really means because by its terms it prohibits way too much. Like it would plausibly be unconstitutional if they actually tried to enforce it that way. Which creates the expectation that things are unauthorized that potentially can't be prohibited, and that's the ambiguity. It's not that you don't know what you can't do, it's that it nominally prohibits so much that you don't know what you can do.
So then you get cases like Sandvig v. Barr where the researchers are assuming the thing they want to do isn't authorized even though that would be unreasonable and then they have to go to court over it. Which is how you get chilling effects, because not everyone has the resources to do that, and companies or the government can threaten people with prosecution to silence them without charges ever being brought because the accused doesn't want to experience "the process is the punishment" when the law doesn't make it sufficiently clear that what they're doing isn't illegal.
Sandvig "was brought by researchers who wished to find out whether employment websites engage in discrimination on the basis of race, gender or other protected characteristics" [1]. It was literally the researchers asking the question you asked and then getting an answer.
"[T]he Court interpreted CFAA’s Access Provision rather narrowly to hold that the plaintiffs’ conduct was not criminal as they were neither exceeding authorized access, nor accessing password protected sites, but public sites. Construing violation of ToS as a potential crime under CFAA, the Court observed would allow private website owners to define the scope of criminal liability – thus constituting an improper delegation of legislative authority. Since their proposed actions were not criminal, the Court concluded that the researchers were free to conduct their study and dismissed the case."
Nobody was prosecuted. Researchers asked a clarifying question and got an answer.
Right. That's what I'm saying. It's used to intimidate people, which doesn't require actually prosecuting them because nearly all of them fold before it even gets to that point or are deterred from doing something they have a right to do because of the risk.
Let's remember how the process works. First they threaten you, then if you don't fold they do a more thorough investigation to try to find ways to prove their case which makes you spend significant resources, then they decide whether to actually prosecute you. They don't actually do it if they can't find a way to make you look like a criminal, but that's why it needs to be unambiguous from the outset that they won't be able to.
Otherwise people will fold at the point of being threatened because you'd have to spend resources you don't have and the deal you're offered gets worse because you made them work for it.
Post Van Buren, the legal concern in Sandvig (that "audit" studies requiring researchers to sign up for a bunch of accounts in ways that violate the ToS of commercial sites could be criminal) is dead everywhere in the US anyway. The idea that mere violation of ToS is per se a violation of CFAA is off the table.
And we had to live under the ambiguity for more than three decades because the law was so poorly considered, and it's still not clear exactly what it covers.
Suppose some researchers are trying to collect enough data to see if a company is doing something untoward. They need a significant sample in order to figure it out, but the company has a very aggressive rate limit per IP address before they start giving HTTP 429 to that IP address for the rest of the day. If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that.
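The mechanics of the hypothetical above can be sketched briefly. This models a server tracking requests per source IP and returning HTTP 429 once a daily quota is exceeded; the quota number is made up for illustration:

```python
# Sketch of per-IP rate limiting as described in the hypothetical above.
# DAILY_LIMIT is a made-up quota, not any real site's policy.
from collections import defaultdict

DAILY_LIMIT = 100  # hypothetical requests-per-IP-per-day quota

class RateLimiter:
    def __init__(self, limit: int = DAILY_LIMIT):
        self.limit = limit
        self.counts = defaultdict(int)  # source IP -> requests seen today

    def allow(self, ip: str) -> int:
        """Return the HTTP status the server would send for this request."""
        self.counts[ip] += 1
        if self.counts[ip] > self.limit:
            return 429  # Too Many Requests, for the rest of the day
        return 200

# The legal question in the comment: each rotated source IP stays under
# the quota individually, so the server never refuses any single request.
```

Note that rotating IPs never produces a refusal the client can see, which is what makes "was this authorized?" the judge's question rather than the researcher's.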
> we had to live under the ambiguity for more than three decades
Reality is infinitely complex. The law, meanwhile, is a construct.
One can always come up with anxious apparitions of hypothetical lawbreaking. (What if I’m murdered by a novel ceramic knife. The killer might get away!)
> If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that
It doesn't. The fact pattern you've just presented is settled law: it might be a tort, it might be some other violation of state law, but it's not a CFAA violation.
I feel like I purposely chose a fact pattern that couldn't meaningfully be distinguished from a DDoS except by the rate, which wasn't specified.
I get that there are cases where someone exceeded a rate limit by a moderate amount and that was fine -- although it's still bad that figuring that out required them to go to court to begin with -- but it seems like we're missing the thing that tells you where the line is. Unless it's really not a violation to just permanently render someone's site inaccessible because you have a lot more bandwidth than them and constantly want the latest version of whatever's on it?
Which is the problem with doing it this way. You don't have anyone working things through to come up with a good rule and give people clarity from the start, so instead it all gets decided slowly over time through expensive litigation.
the problem is it's only off the table until the Trump DOJ decides they want to charge ex-FBI members who investigated Trump with felonies for using an ad blocker, and the Supreme Court changes its mind, since apparently the new law is that Trump can do whatever he wants
No? It's a Supreme Court precedent, established under Trump judges. At the point where you're saying that doesn't matter, you might as well just go the final rhetorical millimeter and say none of the law matters.
'JumpCrisscross is stipulating that you might be right in this analysis but observing that in practice CFAA doesn't play out that way. I'm instead going to go right at your argument and say that a precise definition of unauthorized access isn't necessary in the first place. The statute turns on intent. It's the burden of the prosecution to prove not just that some kind of access was pro-forma unauthorized, but also that the defendant should have known it was.
This is no different than zillions of other criminal statutes, the majority of which hinge on intent.
The problem with trying to make everything hinge on intent is that it's incredibly hard to prove what someone was thinking, which makes judges unreasonably sympathetic to the prosecution's inability to do it. And that frequently leads to various ways of diluting the intent requirement.
A common one is to apply the requirement narrowly. You didn't intend to hurt anyone or steal anything, but you intended to visit that URL, so that's the prosecution's burden satisfied.
Which is why you need it to hinge on more than just intent. Otherwise why do we even have different laws? Just pass one that says it's illegal to be bad, right?
For most laws the badness is inherent in the act. Intentionally killing someone is illegal because killing someone is bad and we're basically just giving you an out if it was unintentional, i.e. accidental killings aren't murder.
The problem here is that accessing a computer is a completely normal and unproblematic thing to do on purpose, so the intent requirement isn't doing much without knowing what "authorization" is supposed to mean but that's the part that isn't clear.
Message board nerds spend a lot of digital ink debating the clarity of what “authorization” means in the CFAA. It’s not clear to me that actual courts or juries find it to be that complicated, relative to other areas of criminal law.
I don’t know if it’s that tech people are just predisposed to overcomplicate, or that legal terminology tends to have definitions that are separate from the tech/colloquial usage of terms. But looking at contemporary use of the CFAA, I don’t actually think “was this hacking or just using the computer like normal” is that hard to figure out.
It's not that hard to figure out at the extremes. Using Google to find a nearby place to get lunch, not a violation. Phishing someone's password so you can sign into their company's network and delete all their files, go to jail. And then most of the cases that are actually brought are of the second category, which makes everything seem fine. But those are the same cases that could be brought under a better law or under other existing laws.
The problem is that if message board nerds and other ordinary people can't figure out what the law requires in the cases that probably won't be prosecuted, but could be, then it deters people from doing things that shouldn't be -- and maybe even aren't -- against the law. And it gives the government a weapon it shouldn't have, because the lack of clarity can be used to coerce plea bargains.
> if message board nerds and other ordinary people can't figure out what the law requires in the cases that probably won't be prosecuted, but could be, then it deters people from doing things that shouldn't be -- and maybe even aren't -- against the law
Message boards are constantly debating insane shit.
If someone feels—beyond generalized anxiety—they’re on the edge of the law, there are plenty of private and public resources they can consult. If they want to shoot shit, as, to be clear, we’re doing here, they can ponder on a message board. The former presages real work. The latter entertainment.
Do other disciplines do this in reverse? Is there a lawyer forum where the members constantly try to construct from the ground up "how does the internet happen", and then draw a bunch of inferences and concerns from their construction?
I'd like to know we're not unique in assuming that expertise in our field grants us the necessary magics to speak as experts in other ones.
People don't care about how the internet works because it's working right now and if it isn't then they can pay someone to fix it.
People care whether something is illegal before they do it because ordinary people can't pay someone to make the prosecutor go away after they've already done the thing they're being charged with.
Of the humans alive today, basically a rounding error of them are ever going to come anywhere near the CFAA. Firstly, the intent requirement. Which has been pointed out to you upthread, and you just sort of waved away by claiming that judges are willing to accept anything as proof of intent, despite that not seeming to be the case from the cases I can see. Secondly, the average human alive is not poking at web vulnerabilities as part of their humanitarian journalism. The CFAA is nearly a perfect overlap where the people at risk of accidentally violating it are the folks who have the means to ask a professional about their nuanced situation.
The CFAA (like many laws) has problems, but you've really latched onto whether or not something counts as a crime for the CFAA in a way that doesn't seem to be attached with a real threat model.
> Firstly, the intent requirement. Which has been pointed out to you upthread, and you just sort of waved away by claiming that judges are willing to accept anything as proof of intent, despite that not seeming to be the case from the cases I can see.
It's not that they're willing to accept anything, it's that isn't really how intent works, because intent is a question of fact when the ambiguity is a question of law.
Which is how it produces an unreasonable result: It's not the intent to commit a crime, it's the intent to commit an act, whether or not you knew it was a crime. And when the law was ambiguous, you can't have known, because the judge hadn't decided it yet. So you do X, you end up in court, the judge finally decides if X is illegal, and then if it is the government has to prove if you intended to do X. But you did intend to do that, you just didn't intend for that to be illegal, which is the part they don't include in the accounting.
It seems like I should also clarify this point from my post above:
> and failed to include the part about intent to commit a crime
The salient thing that burglary has and the CFAA is missing isn't the intent to commit unauthorized access, it's the intent to commit a separate crime, e.g. unauthorized access with the intent to commit credit card fraud. Because without that the penalties make no sense and it's not clear that should even be an independent crime, since there are definitely times when accessing a server the operator doesn't want you to should be allowed and basically all of the cases where it shouldn't are the instances where you're doing it to commit a separate crime.
> The CFAA is nearly a perfect overlap where the people at risk of accidentally violating it are the folks who have the means to ask a professional about their nuanced situation.
So you're in a situation where there hasn't been a high court case with the same fact pattern yet. You ask a professional and they tell you so. What now?
> Is there a lawyer forum where the members constantly try to construct from the ground up "how does the internet happen", and then draw a bunch of inferences and concerns from their construction?
Idk about lawyers, but finance forums will regularly construct absolutely batshit crazy assumptions about how the world works.
Just to be clear: I think ordinary people (and message board nerds when they’re not trying to win message board debates) are entirely capable of looking at a fact pattern and identifying if it was or wasn’t hacking.
If you think that’s not true, can you give some examples of ambiguous fact patterns?
You still haven't explained what the lack of clarity you're concerned about is. The examples you've given have been things that are settled law as not illegal. You're never going to get a schematic list of everything that could violate the law, and you don't get that for non-computer crimes either.
It's not about enumerating every possible circumstance, it's about the law erring on the side of prohibiting more than it should rather than less and effectively shifting the burden to the defense to establish an exception to a rule that by its terms nominally prohibits anything a company doesn't like.
Let's try this for an example. There is a company whose security is very bad. Their company portal is on the internet and if you visit the site it shows you everything. A journalist gets tipped off about this and is presented with the opportunity to read the company's internal documents which allegedly show clear evidence of a crime so they can write a story about it.
How bad does the security have to be before the journalist is in trouble? Is it illegal if there is absolutely no access control but the company hadn't intended to publish that? What if anyone can create their own account? What if there is a login box but it doesn't care what you put in it, so you can make up your own username and password? What if it requires an existing username but accepts a blank password? What if it only requires a password and it's just really easy to guess? Or someone gave it to them? What if someone at the company sends them an internal URL but it's accessible on the internet? Does it matter if they sent it on purpose or by accident?
I admittedly haven't checked which of those if any have already had precedents established, but it's unlikely that every one of those scenarios has made it all the way to the Supreme Court, so what's the journalist supposed to do when they find themselves facing one where it hasn't? Not write the story? Do it anyway because maybe?
And it's not just a matter of how to tell where the line is. The journalist is being a journalist, not stealing credit cards. If the thing that matters is really intent then their intent was to expose a crime, and in that case why do we want a law that makes any of those illegal?
It's a good thing that intent is a major element of the crime, then.
If you just happen on a dump of a company's data, you didn't have the necessary intent. If you hit a login form and figure out that it has flaws and then use those flaws to access data, you do.
The examples you're giving don't seem to be ambiguous?
There's a pretty clear pattern if you look at cases where folks have found flaws in websites. Find a flaw? So far, so good. Test the flaw against dummy data or your own data? Still good. Test the flaw by pulling other people's data or trying things that would reasonably damage the company's infrastructure? Not good.
> If you just happen on a dump of a company's data, you didn't have the necessary intent. If you hit a login form and figure out that it has flaws and then use those flaws to access data, you do.
A good first question here is why should that be the thing that matters?
Take the scenario where it lets anyone create an account. It's not yet obvious at that point what the thing is even for, but you sign up for an account and it gives you one. Once you sign in the things you have access to might be the sort of things you might not expect to be public, but then how are you distinguishing that from a data dump with the same stuff in it? Or is this one allowed because they're still essentially granting access to the public?
If someone who works there gives you the password, are you now authorized because they just authorized you, or not authorized because the password was only meant for people who work there? What if the password is included as part of the link?
So is the form of access control really the thing that ought to matter? Or is it what you're accessing? But now notice that the company isn't going to purposely authorize you to view the evidence of their criminal activity, so maybe a law that imposes a blanket ban on anybody accessing anything a company doesn't want them to is broader than it ought to be.
> But now notice that the company isn't going to purposely authorize you to view the evidence of their criminal activity, so maybe a law that imposes a blanket ban on anybody accessing anything a company doesn't want them to is broader than it ought to be.
I think we've jumped pretty clearly here from actual discussion about the CFAA to a policy stance you're taking about how you feel it should be acceptable to hack companies if they deserve it.
You have not here presented a fact pattern that would put the journalist at risk. A journalist can safely write a story about the gross insecurity of a website. You could put 10 million bank account numbers behind a login field that accepts 'OR''=' as a password, and write about that. You could have a bypass for that login whereby incrementing an integer revealed those bank accounts, one after another, on an unauthenticated HTTP GET.
Where you get into trouble is when you use either of those conditions to collect bank account numbers. Whether you're collecting them to sell or collecting them as color (the amount, scale, diversity, whatever) for your story: you'll be expected to understand that you did not have authorized access to that data, and by collecting it, you'll have violated CFAA.
You would similarly be at risk when, having used the 'OR''=' password, you then poked around inside the website to see what else was exposed. That might "feel" like journalism. So too would be wandering around inside a bank you found unlocked at night. But no sane journalist would do what I just described.
In fact: this is straightforward. Further evidence of that: that journalists routinely write about this stuff and don't get prosecuted.
The Barrett Brown case is an especially good illustration of where the lines are drawn.
They're not trying to write a story about the security of the website, they're trying to write a story about the crime the company is committing. They're allegedly poisoning the water and killing people, it's more serious than a website. If they write the first story the company immediately takes the site offline before anybody else can see what's there, or if anyone does then they could get prosecuted.
The analogy to a bank vault doesn't work because it isn't a bank vault and you've never left your office. It's more analogous to finding the mailing address of the company's internal records office and then sending them a letter requesting a copy of their records. You should go to jail for requesting something it's not even illegal for you to have just because they were willing to send them to you without establishing who you are?
Yeah, you can't hack into websites to pursue stories about corporate misdeeds, any more than you could break into a company's office and rifle through the files. This is silly.
The question is at what point is it considered "hacking"? There is evidence of corporate misdeeds on the company's computers. Under what circumstances can a journalist view it? At no point would the guilty company want them to for obvious reasons, but if the answer is thereby "never" that seems like a major flaw in the law. Whereas if it isn't never then when is it, and why?
Or to extend your analogy, where's the computer equivalent of an investigative reporter getting let inside under a pretense so they can snoop around wearing a guest badge instead of prying open the back door with a crowbar?
The question at the trial will be whether a reasonable person would have believed the evil corporation authorized the requests. You seem set on replacing the evil corporation with society, interposing a sort of "it's a public good for this information to come out, and so we'd generally authorize it". But if the company itself clearly wouldn't have intended you to have that access, and you knew that, and you used the access anyways, then yes: you committed a crime.
Again: mere ToS violations are not enough to cross that line.
What if I team up with another journalist, and I tell them about curl commands to run but never tell them that they're exploiting vulnerabilities in the company's website? That way they don't have the necessary intent and I never perform any illegal acts?
Do you think the judge would fall for it? Or would we have done a RICO?
There ought to just be a blanket criminal law for intentionally causing financial damages to citizens over a certain amount. Fraud is typically a civil matter, but the problem comes when someone causes $5000 of fraud to 200 people, which is made much easier by the internet. It doesn't make financial sense to sue for that amount. If we had a law that intentionally causing $1 million or more of civil damages is also a felony punishable by up to 10 years in prison this would allow DAs to apply well deserved criminal penalties without having the possibility of criminalizing harmless behavior.
Fraud is almost definitionally not a civil matter. There is civil fraud, but it bears the same relationship to fraud as the Goldman's wrongful death case did to the OJ criminal case.
I'm not talking about technical definitions. The fact is that if you have a fraud case in the USA, there's a 99% chance civil court is the only place it's ever going to be filed.
It's not a statistic. It's my personal experience in business and what my lawyers have told me. The 99% is just a number I pulled out of my ass to mean "basically always." If I were going to put money on it, though, I would say it's actually an understatement. Do you have any statistics?
The DOJ??? Are you even trying to be serious here claiming the feds are getting involved in anything more than a vanishingly small percentage of fraud claims?
The governing bureaucrats of the post WWII period have decided that the limited government of the previous era does not give them the level of control over citizens lives that they want. They know that rolling back existing protections is difficult politically since those pesky citizens don't know what's good for them. So our ruling betters need to be a bit more clever. They stoke fear over criminals being out of control because they are using scary new technology that the police just can't handle. Therefore we need to pass harsh new laws to control it. Of course over time that "scary new technology" becomes the routine way everybody communicates, but now without the legal protections that the old system had.
At least they are disguising it. Australia, in the state of SA, has the "Police Complaints and Discipline Act 2016", which expressly allows police officers to investigate their own corruption, and when they inevitably conclude they did nothing illegal, anyone who complains to anyone else, including human rights organisations, will be arrested and charged.
Source: I was arrested by my ex-wife's boyfriend, who denied all my human rights in detention. All his ridiculous charges were thrown out, but he and his police partner were allowed to investigate themselves as to whether they violated any laws. I then received a threatening letter from the Attorney-General telling me I would be charged if I brought up the particulars of it with anyone.
This is why I harp about platforms needing to become decentralised and un-censorable. I was hoping the fediverse could become something in that direction but I'd also had hopes for things like IPFS, Matrix (Element I think used to host videos), DLive, SteemIT, etc. User adoption and network effects are really the only limiting factors - but as people get more disillusioned with Meta and X, there is always a space for competing platforms.
I take an Occam's Razor to the usual arguments against it - the problems these create (fake news, slander, etc) are already prominent in regulated media platforms, which also rely on community moderation as a result. The solutions it enables (space for fearless/citizen reporting, Streisand effects for censorship rather than big-tech powered banhammers) are wholly absent in regulated media.
Besides tech, and going by the press freedom index, one's only hope for good journalism today would be to incorporate in New Zealand. But you'd still have to face the odds of your content being banned in the countries they report on.
The other issue with the anti-press efforts by governments is that it weaponises the state against on-ground journalism and ends up encouraging out-of-country reporting as a result.
>> but as people get more disillusioned with Meta and X, there is always a space for competing platforms.
Diaspora, way back when, had a chance to carry the torch and do some damage to Meta. When it was released, I moved to it immediately. Nobody from any other social platform would join me.
It's one thing to create a decentralized platform; it's quite another to overcome the network effect, where friends and family have other friends and family who can't move now because none of their friends or family will move. This is why there was a very narrow window, before Meta became the juggernaut it is today, to get people to move to a more decentralized platform.
Now? Close to 100% impossible to win that game - regardless of the opportunity for freedom from censorship and government overreach. There will be small pockets of people moving to them, but there's a good chance we will never see the kind of numbers that Meta, X, YouTube or other platforms have right now. They are just so entrenched at this point.
Why is this surprising to anyone? When the government is corrupt, the laws are just a convenient cover for doing whatever you wanted to do anyway.
Secondly, which countries does the article mention? Nigeria, Pakistan, Georgia, Turkey, and Jordan. Such countries strain the definition of "government" let alone "law".
Corruption and abuses of power exists in most countries, but the degree to which these abuses of power occur in Nigeria (hybrid regime), Pakistan (hybrid regime), Georgia (illiberal democracy turned hybrid regime), Turkey (illiberal democracy turned hybrid regime), and Jordan (hybrid regime) compared to their peers let alone Western countries is fairly well known.
The EIU, V-Dem, CPI, World Bank, and various other benchmarks highlight this as well.
You are oversimplifying. There are degrees of corruption, just like there are degrees of crime: running a red light isn't the same as shooting someone. This is not to dismiss corruption or breaking laws, but oversimplifying is a tactic authoritarians use to create confusion in people who don't want to think too hard about things. Marcos and Aquino, Amin, Putin... Trump is more like Marcos: not massive slaughter, but oligarchs aren't even pretending to hide in the shadows. I'm sure the US's good guy Jimmy Carter had some corruption, but it was nowhere near Marcos.
> "Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."
200 good software and marketing engineers that ignore studies and fight for a good, evolutionary rational cause ... as good as they make proxy farms for scraping ... damn ... so much after work, so much to write about, so much to critique, so. much. capital.
These aren't really cybercrime laws as such; they're cybercrime statutes that include defamation and misinformation laws. It's those speech restrictions, which are explicit and not a knock-on consequence of fighting what we consider "cybercrime", that are the root of this reporting.
the first global framework “for the collection, sharing and use of electronic evidence for all serious offenses”.. the first global treaty to criminalize crimes that depend on the internet.. [it] has been heavily criticized by the tech industry, which has warned that it criminalizes cybersecurity research and exposes companies to legally thorny data requests. Human rights groups warned.. [it] forces member states to create a broad electronic surveillance dragnet that would include crimes that have nothing to do with technology
> states parties are obligated to establish laws in their domestic system to “compel” service providers to “collect or record” real-time traffic or content data. Many of the states behind the original drive to establish this convention have long sought this power over private firms.
So, (1) this is a dead letter because UN cybercrime isn't going to happen here, and (2) it's not a good treaty and I wouldn't support it anyways, but the UN cybercrime convention doesn't have any of the problematic terms discussed in this CJR article. It seeks to criminalize:
(7) Unlawful access to systems
(8) Interception and wiretapping
(9) Interfering with data (presumably: encrypting and ransoming databases)
(10) DOS attacks
(11) Knowingly selling hacking tools to criminals
(12) Forging online documents
(13) Online wire fraud
(14) CSAM
(15) Solicitation and grooming
(16) Revenge porn
Articles 14-16 are the closest you get to something not "according to Hoyle" cybercrime. I wouldn't want them in my cybercrime treaty, but I'd be pretty chill about them being standalone domestic laws.
A reminder: no matter what a UN convention says, treaties don't preempt the US Constitution. We could not enforce a treaty that includes Nigeria's misinformation terms --- it would violate the First Amendment. (Also useful to know, contrary to widespread belief online, that a self-executing treaty is itself preempted by statutes passed after it).
Welcome to Earth! Some people really enjoy exploiting legal loopholes.
Two years ago, I was sued for $10,000 for copyright infringement over embedding a YouTube video on my website. Their lawsuit described the word “embed” as if it meant “upload.” But they are two different things. I won the case. But I realized that others didn't.
I learned that the company filed lawsuits against dozens of websites, especially Blogspot sites. I even heard a rumor.
They share content on social media and community sites in a way that entices people, focusing on areas that remain in a gray zone and where few people know it's illegal.
For example, “Embed movies from YouTube and share them on your website. You'll make a lot of money. If I knew how to program, I would do it.” This is just one example. There are many different examples. By the way, my site wasn't a movie site.
They apparently file lawsuits like clockwork against anyone who triggers their radar with the right keywords via Google Alerts.
Cybercrimes are just another reflection of this. If I could, I'd share more, but I don't want to go to jail. Freedom of expression isn't exactly welcomed everywhere on the internet.
> Across the world, well-meaning laws intended to reduce online fraud and other scourges of the internet are being put to a very different use.
If only someone, anyone, could have foreseen this /s. I read so many HN comments about the "slippery slope fallacy," back when the powers that be were censoring the people that they didn't like. I bet they'll be right back where they were next time the government is going after the "misinformation" they don't like.
Any instrument that can be used to repress opposition should be minimal, transparent, and tightly limited if it must exist at all. When power gets new levers, it always finds new ways to pull them.
But in this case it may be designed for that purpose.
> One provision in particular—Section 24, which made it illegal to publish false information online that was deemed to be “grossly offensive,” “indecent,” or even merely an “annoyance”—has been especially ripe for abuse
Quite right. However, certain media outlets have knowingly published false information, and when pushed on this they claim that those reports happened as part of the "opinion" side of their reporting. Before you get smug, your side does it too (as does mine). I am less concerned with blaming people than with coming up with a mitigation of these issues.
So I think we need a two-class system of reporting: a factual part, where knowingly reporting false information has consequences, and an opinion part, where it doesn't. Journalists would claim they already do this, but here is the new policy: reporting must consistently and clearly show which class the report belongs to. Maybe a change in background color on websites, or a change in the frame color for videos; something that makes it visually and immediately clear which class this reporting belongs to. That way people can more accurately assess how much credibility the reporting deserves.
In a different time, when different mindsets prevailed, the US government handled this about as well as you could hope.
The Fairness Doctrine is irrelevant today because of the way news is published/broadcast, but it was effective, in my humble opinion.
From Wikipedia: “The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters.”
And without getting too political, the beginning of a lot of our media woes in terms of news correlates nicely with when the doctrine was revoked.
What’s the principled line between journalism and crime, if there is one that isn’t just opinion? Often journalists are not just protecting sources but guiding them or encouraging them. And those sources are sometimes committing crimes like leaking trade secrets or other confidential info.
> Last July 8, Sabrina Rubin Erdely, a writer for Rolling Stone, telephoned Emily Renda, a rape survivor working on sexual assault issues as a staff member at the University of Virginia. Erdely said she was searching for a single, emblematic college rape case that would show “what it’s like to be on campus now … where not only is rape so prevalent but also that there’s this pervasive culture of sexual harassment/rape culture,” according to Erdely’s notes of the conversation
Thankfully, no. But from reading comments on the internet, it seems like "look what you made me do" is considered a valid excuse by a large percentage of so-called adults in the US.
I know it directly from first-hand experience. And I liken it to jurisprudence on incitement to violence. Is incitement to theft also punishable? Does the motivation being journalism matter? Why or why not?
Journalists provide a valuable public service: publishing the truth. The position you’re advocating for is summed up as “fuck the truth, bend the knee to the law.” Your opinion is incompatible with a free society.
Hold on, who said a journalist was inciting criminal activity? That is a completely different animal. Of course I am not saying that’s fine. That’s not even remotely what I’m talking about.
Laws by definition are capsules of power. Some laws give more power to government, and some give power to certain sections of people, such as gender-based or caste-based laws, renter-vs-landlord rules, etc. Such laws are easy to weaponize by the party the law favors, and they actually increase crime through fake cases. In some Western countries, teen gangs create so much terror only because they are immune to punishment by law.
US federal regulations are full of laws that take something minor or completely legal, and add huge punishments because someone used technology. All the way back in 1952, fraud punishments were worse if someone used a telephone to commit fraud. In 1982 the Computer Fraud and Abuse Act added even more punishment, if someone used a computer.
Fraud is bad, and it should be illegal, but why have different punishments based on what technology someone used?
Laws like this go outside of fraud, and often are clearly unconstitutional, like the Unlawful Internet Gambling Enforcement Act of 2006, which made lawful gambling illegal too, until it was effectively overturned with Murphy v. National Collegiate Athletic Association in 2018.
>but why have different punishments based on what technology someone used?
So first as foundation, I see no reason to pretend that the law is always perfectly thought through and logical particularly when it comes to crime. And even when laws made sense for their time, that also doesn't mean circumstances haven't changed over the decades while the law remained static.
That said, in principle punishment embodies multiple components, and a major one is deterrence. The deterrence value in turn interplays with factors like barrier to entry, the scale of potential harm, and the likelihood of getting caught. Use of technology can have a significant impact on all of these. It's significantly more challenging and expensive to prosecute crimes that stretch across many jurisdictions; technology can also have a multiplier effect, allowing criminal actors to go after far more people, both in raw numbers and in finding the small percentage who are vulnerable; and perceived anonymity/impunity can further increase the number of actors and their activity levels. It has also often implied a higher degree of sophistication.
All of that weighs towards a higher level of punishment, even as pure game theory. That doesn't mean the present levels are correct, or that punishment shouldn't be only one part of a broader fight against fraud whose other aspects depressingly often get neglected, but it's not irrational to punish more when criminals are generating more damage and working hard to reduce the chance of facing any penalties at all.
> So first as foundation, I see no reason to pretend that the law is always perfectly thought through and logical particularly when it comes to crime.
You will of course never reach perfection, but considering that when a law is applied a lot of violence (police, jail, ...) gets involved, a politician who does not dedicate his life towards making the laws as perfect as humanly possible (treating the discovery of an imperfection in the laws as big a human breakthrough as the discovery of quantum physics or general relativity) clearly does not deserve to be elected.
>a politician who does not dedicate his life towards making the laws as perfect as humanly possible (...) clearly does not deserve to be elected.
Oh, sweet summer child. Not attempting to make laws "as perfect as humanly possible" is the least of our worries with politicians!
Most of them dedicate their lives actively towards the opposite: making the laws as bad as possible, out of ideology, out of being paid by lobbies and monopolies, out of personal interest, and so on.
You have two levers to enforce law: you can get better at catching lawbreakers, or you can punish those who are caught harder. There are studies showing that catching a higher percentage of criminals and punishing them in a timely fashion leads to lower crime than punishing those you do catch more harshly. Europe in general has more police officers per capita and higher, more timely conviction rates. The US, on the other hand, spends more on prisons and has fewer officers. I think this is partially cultural and partially due to how responsibilities and financing are split between local, state, and federal government in the US.
Fraud via phone or computer is harder to catch. So the US follows its established pattern: instead of ramping up law enforcement efforts, it increases punishment.
Europe has a similar problem of over-punishing "crimes with a computer". In many EU countries, there's no punishment for trespassing, but even accessing an open network share that you found on Shodan, looking around out of curiosity, then disconnecting, is punishable with prison time.
The big problem with CFAA isn't particular to CFAA at all; it's that it shares the 2B1.1 loss table with all the other federal criminal statutes, and computers are very good and very fast at running the numbers on that table up. It's a real problem, and I'm not pushing back on the idea that something should change about it, but I wouldn't characterize the problem the way you do, as the law singling out crimes involving computers.
Part of the history of CFAA was that it was passed because the state of the law preceding it didn't comfortably criminalize things like malicious hacking and denial of service; you can do those things without tripping over wire fraud.
> it's that it shares the 2B1.1 loss table with all the other federal criminal statutes, and computers are very good and very fast at running the numbers on that table up.
That's a problem with it, but another big one is that it's inherently ambiguous.
The normal way you know if you're authorized to do something with a computer is that it processes the request. They're perfectly capable of refusing; you get "forbidden" or "access denied" but in that case you're not actually accessing it, you're just being informed that you're not allowed to right now. So for there to be a violation the computer would have to let you do something it isn't supposed to. But how are you supposed to know that then?
On a lot of websites -- like this one -- you go to a page like https://news.ycombinator.com/user?id=<user_id> and you get the user's profile. If you put in your user there then you can see your email address and edit your profile etc. If the server was misconfigured and showing everyone's email address when it isn't supposed to, how is someone supposed to know that? Venmo publishes their users' financial transactions. If you notice that and think it's weird and write a post about it, should the company -- possibly retroactively -- be able to decide that the data you're criticizing them for making public wasn't intended to be, and therefore your accessing it was a crime? If you notice this by accident when it's obvious the data shouldn't be public -- you saw it when you made a typo in the URL -- should there be a law that can put you in jail if you admit to this in the process of making the public aware of the company's mistake, even if your access resulted in no harm to anyone?
The wording is too vague and it criminalizes too much. "Malicious hacking" might not always be wire fraud but in other cases it could be misappropriation of trade secrets etc., i.e. whatever the actual act of malice is. The problem with the CFAA is that it's more or less attempting to be the federal computer law against burglary (i.e. unlawful entry with the intent to commit a crime) except that it makes the "unlawful" part too hard to pin down and failed to include the part about intent to commit a crime, which allows it to be applied against people it ought not to.
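To make the ambiguity concrete, here is a minimal sketch (the function name is mine, purely illustrative) of the only "authorization" signal an ordinary visitor ever gets from a website: the HTTP status code. Nothing in it distinguishes a page that is public by design from one that is public by misconfiguration.

```python
def server_says_authorized(status_code: int) -> bool:
    """Interpret an HTTP status the way an ordinary visitor would:
    the server either processes the request or refuses it."""
    if 200 <= status_code < 300:
        # Request processed: to the visitor, this *is* authorization,
        # whether the data was meant to be public or not.
        return True
    if status_code in (401, 403):
        # Explicit refusal: clearly not authorized.
        return False
    # Redirects, errors, rate limits: no access granted either way.
    return False
```

A 200 response from a misconfigured server and a 200 from an intentionally public one are indistinguishable to the client, which is exactly the problem: the statute asks the visitor to know something the protocol never tells them.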
> how is someone supposed to know that?
When was the last CFAA prosecution where the perpetrator literally didn't know they were doing something unauthorised?
Legislative overreach that leads to an almost total reliance on prosecutorial discretion is a terrible way to run a society. The moment that federal prosecutors stop being obsessed with 100% conviction rates, the whole weaponized process becomes tyrannical overnight. Regardless of innocence, most people get advised today to take the dramatically reduced plea bargain because of the extortion-tier penalties for most crimes; we barely use trials to establish facts and guilt any more.
> almost total reliance on prosecutorial discretion is a terrible way to run a society
Asking for precedent is not the same as "total reliance on prosecutorial discretion." It's asking if a hypothetical is grounded.
> moment that federal prosecutors stop being obsessed with 100% conviction rates, the whole weaponized process becomes tyrannical overnight
This is an orthogonal problem. Prosecutors can bring bullshit cases with zero basis in the law if they want to.
So that's actually a big part of the problem. "Unauthorized" means what, that they abstractly don't like what you're doing? It's hard to tell what it really means because by its terms it prohibits way too much. Like it would plausibly be unconstitutional if they actually tried to enforce it that way. Which creates the expectation that things are unauthorized that potentially can't be prohibited, and that's the ambiguity. It's not that you don't know what you can't do, it's that it nominally prohibits so much that you don't know what you can do.
So then you get cases like Sandvig v. Barr where the researchers are assuming the thing they want to do isn't authorized even though that would be unreasonable and then they have to go to court over it. Which is how you get chilling effects, because not everyone has the resources to do that, and companies or the government can threaten people with prosecution to silence them without charges ever being brought because the accused doesn't want to experience "the process is the punishment" when the law doesn't make it sufficiently clear that what they're doing isn't illegal.
> then you get cases like Sandvig v. Barr
Sandvig "was brought by researchers who wished to find out whether employment websites engage in discrimination on the basis of race, gender or other protected characteristics" [1]. It was literally the researchers asking the question you asked and then getting an answer.
"The Court interpreted CFAA’s Access Provision rather narrowly to hold that the plaintiffs’ conduct was not criminal as they were neither exceeding authorized access, nor accessing password protected sites, but public sites. Construing violation of ToS as a potential crime under CFAA, the Court observed would allow private website owners to define the scope of criminal liability – thus constituting an improper delegation of legislative authority. Since their proposed actions were not criminal, the Court concluded that the researchers were free to conduct their study and dismissed the case."
Nobody was prosecuted. Researchers asked a clarifying question and got an answer.
[1] https://globalfreedomofexpression.columbia.edu/cases/sandvig...
Right. That's what I'm saying. It's used to intimidate people, which doesn't require actually prosecuting them because nearly all of them fold before it even gets to that point or are deterred from doing something they have a right to do because of the risk.
Let's remember how the process works. First they threaten you, then if you don't fold they do a more thorough investigation to try to find ways to prove their case which makes you spend significant resources, then they decide whether to actually prosecute you. They don't actually do it if they can't find a way to make you look like a criminal, but that's why it needs to be unambiguous from the outset that they won't be able to.
Otherwise people will fold at the point of being threatened because you'd have to spend resources you don't have and the deal you're offered gets worse because you made them work for it.
Post Van Buren, the legal concern in Sandvig (that doing "audit" studies requiring signing up for a bunch of accounts in ways that violate the ToS of commercial sites would be criminal) is dead anyways, everywhere in the US. The idea that mere violation of ToS is per se a violation of CFAA is off the table.
And we had to live under the ambiguity for more than three decades because the law was so poorly considered, and it's still not clear exactly what it covers.
Suppose some researchers are trying to collect enough data to see if a company is doing something untoward. They need a significant sample in order to figure it out, but the company has a very aggressive rate limit per IP address before they start giving HTTP 429 to that IP address for the rest of the day. If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that.
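For contrast with the multi-IP scenario above, the cooperative behavior a site actually signals with HTTP 429 looks something like this sketch (function name and default values are mine, not from any standard beyond the general Retry-After semantics):

```python
def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds a polite client should wait before retrying after
    receiving an HTTP 429 Too Many Requests response.

    Honors an explicit Retry-After value when the server sends one;
    otherwise falls back to capped exponential backoff.
    """
    if retry_after is not None:
        # The server told us exactly how long to wait.
        return float(retry_after)
    # No hint from the server: double the wait each attempt, up to cap.
    return min(cap, base * (2 ** attempt))
```

A single-IP client sleeping `backoff_delay(...)` between retries is plainly respecting the limit; the open legal question in the hypothetical is whether spreading the same requests across several IPs flips that into "unauthorized access."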
> we had to live under the ambiguity for more than three decades
Reality is infinitely complex. The law, meanwhile, is a construct.
One can always come up with anxious apparitions of hypothetical lawbreaking. (What if I’m murdered by a novel ceramic knife. The killer might get away!)
> If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that
It doesn’t. It requires a lawyer.
>Reality is infinitely complex. The law, meanwhile, is a construct
We managed to make the law more complex than actual reality.
It doesn't. The fact pattern you've just presented is settled law: it might be a tort, it might be some other violation of state law, but it's not a CFAA violation.
I feel like I purposely chose a fact pattern that couldn't meaningfully be distinguished from a DDoS except by the rate, which wasn't specified.
I get that there are cases where someone exceeded a rate limit by a moderate amount and that was fine -- although it's still bad that figuring that out required them to go to court to begin with -- but it seems like we're missing the thing that tells you where the line is. Unless it's really not a violation to just permanently render someone's site inaccessible because you have a lot more bandwidth than them and constantly want the latest version of whatever's on it?
Which is the problem with doing it this way. You don't have anyone working things through to come up with a good rule and give people clarity from the start, so instead it all gets decided slowly over time through expensive litigation.
the problem is it's only off the table until the Trump DOJ decides they want to charge ex-FBI members who investigated Trump with felonies for using an ad blocker, and the Supreme Court changes its mind, since apparently the new law is that Trump can do whatever he wants
No? It's a Supreme Court precedent, established under Trump judges. At the point where you're saying that doesn't matter, you might as well just go the final rhetorical millimeter and say none of the law matters.
'JumpCrisscross is stipulating that you might be right in this analysis but observing that in practice CFAA doesn't play out that way. I'm instead going to go right at your argument and say that a precise definition of unauthorized access isn't necessary in the first place. The statute turns on intent. It's the burden of the prosecution to prove not just that some kind of access was pro-forma unauthorized, but also that the defendant should have known it was.
This is no different than zillions of other criminal statutes, the majority of which hinge on intent.
The problem with trying to make everything hinge on intent is that it's incredibly hard to prove what someone was thinking, which makes judges unreasonably sympathetic to the prosecution's inability to do it. And that frequently leads to various ways of diluting the intent requirement.
A common one is to apply the requirement narrowly. You didn't intend to hurt anyone or steal anything, but you intended to visit that URL, so that's the prosecution's burden satisfied.
Which is why you need it to hinge on more than just intent. Otherwise why do we even have different laws? Just pass one that says it's illegal to be bad, right?
How is this any different than all the other commonly prosecuted laws that hinge on intent?
For most laws the badness is inherent in the act. Intentionally killing someone is illegal because killing someone is bad and we're basically just giving you an out if it was unintentional, i.e. accidental killings aren't murder.
The problem here is that accessing a computer is a completely normal and unproblematic thing to do on purpose, so the intent requirement isn't doing much without knowing what "authorization" is supposed to mean but that's the part that isn't clear.
Message board nerds spend a lot of digital ink debating the clarity of what “authorization” means in the CFAA. It’s not clear to me that actual courts or juries find it to be that complicated, relative to other areas of criminal law.
I don’t know if it’s that tech people are just predisposed to over-complicate, or that legal terminology tends to have definitions separate from the tech/colloquial usage of terms. But looking at contemporary usage of the CFAA, I don’t actually think “was this hacking or just using the computer like normal” is that hard to figure out.
It's not that hard to figure out at the extremes. Using Google to find a nearby place to get lunch, not a violation. Phishing someone's password so you can sign into their company's network and delete all their files, go to jail. And then most of the cases that are actually brought are of the second category, which makes everything seem fine. But those are the same cases that could be brought under a better law or under other existing laws.
The problem is that if message board nerds and other ordinary people can't figure out what the law requires in the cases that probably won't be prosecuted, but could be, then it deters people from doing things that shouldn't be -- and maybe even aren't -- against the law. And it gives the government a weapon it shouldn't have, because the lack of clarity can be used to coerce plea bargains.
> if message board nerds and other ordinary people can't figure out what the law requires in the cases that probably won't be prosecuted, but could be, then it deters people from doing things that shouldn't be -- and maybe even aren't -- against the law
Message boards are constantly debating insane shit.
If someone feels—beyond generalized anxiety—that they’re on the edge of the law, there are plenty of private and public resources they can consult. If they want to shoot the shit, as, to be clear, we’re doing here, they can ponder on a message board. The former presages real work. The latter, entertainment.
Do other disciplines do this in reverse? Is there a lawyer forum where the members constantly try to construct from the ground up "how does the internet happen", and then draw a bunch of inferences and concerns from their construction?
I'd like to know we're not unique in assuming that expertise in our field grants us the necessary magics to speak as experts in other ones.
People don't care about how the internet works because it's working right now and if it isn't then they can pay someone to fix it.
People care whether something is illegal before they do it because ordinary people can't pay someone to make the prosecutor go away after they've already done the thing they're being charged with.
I think you've fallen into a message board hole.
Of the humans alive today, basically a rounding error of them are ever going to come anywhere near the CFAA. Firstly, the intent requirement. Which has been pointed out to you upthread and you just sortof waved away that judges are willing to accept anything as proof of intent, despite that not seeming to be the case from the cases I can see. Secondly, the average human alive is not poking at web vulnerabilities as part of their humanitarian journalism. The CFAA is nearly a perfect overlap where the people at risk of accidentally violating it are the folks who have the means to ask a professional about their nuanced situation.
The CFAA (like many laws) has problems, but you've really latched onto whether or not something counts as a crime for the CFAA in a way that doesn't seem to be attached with a real threat model.
> Firstly, the intent requirement. Which has been pointed out to you upthread and you just sort of waved away that judges are willing to accept anything as proof of intent, despite that not seeming to be the case from the cases I can see.
It's not that they're willing to accept anything; it's that this isn't really how intent works, because intent is a question of fact while the ambiguity is a question of law.
Which is how it produces an unreasonable result: it's not the intent to commit a crime, it's the intent to commit an act, whether or not you knew it was a crime. And when the law was ambiguous, you couldn't have known, because the judge hadn't decided it yet. So you do X, you end up in court, the judge finally decides whether X is illegal, and then, if it is, the government has to prove that you intended to do X. But you did intend to do that; you just didn't intend for it to be illegal, which is the part they don't include in the accounting.
It seems like I should also clarify this point from my post above:
> and failed to include the part about intent to commit a crime
The salient thing that burglary has and the CFAA is missing isn't the intent to commit unauthorized access, it's the intent to commit a separate crime, e.g. unauthorized access with the intent to commit credit card fraud. Because without that the penalties make no sense and it's not clear that should even be an independent crime, since there are definitely times when accessing a server the operator doesn't want you to should be allowed and basically all of the cases where it shouldn't are the instances where you're doing it to commit a separate crime.
> The CFAA is nearly a perfect overlap where the people at risk of accidentally violating it are the folks who have the means to ask a professional about their nuanced situation.
So you're in a situation where there hasn't been a high court case with the same fact pattern yet. You ask a professional and they tell you so. What now?
> Is there a lawyer forum where the members constantly try to construct from the ground up "how does the internet happen", and then draw a bunch of inferences and concerns from their construction?
Idk about lawyers, but finance forums will regularly construct absolutely batshit crazy assumptions about how the world works.
I think it’s a feature of online echo chambers.
Just to be clear: I think ordinary people (and message board nerds when they’re not trying to win message board debates) are entirely capable of looking at a fact pattern and identifying if it was or wasn’t hacking.
If you think that’s not true, can you give some examples of ambiguous fact patterns?
You still haven't explained what the lack of clarity you're concerned about is. The examples you've given have been things that settled law says are not illegal. You're never going to get a schematic list of everything that could violate the law, and you don't get that for non-computer crimes either.
It's not about enumerating every possible circumstance, it's about the law erring on the side of prohibiting more than it should rather than less and effectively shifting the burden to the defense to establish an exception to a rule that by its terms nominally prohibits anything a company doesn't like.
Let's try this for an example. There is a company whose security is very bad. Their company portal is on the internet and if you visit the site it shows you everything. A journalist gets tipped off about this and is presented with the opportunity to read the company's internal documents which allegedly show clear evidence of a crime so they can write a story about it.
How bad does the security have to be before the journalist is in trouble? Is it illegal if there is absolutely no access control but the company hadn't intended to publish that? What if anyone can create their own account? What if there is a login box but it doesn't care what you put in it, so you can make up your own username and password? What if it requires an existing username but accepts a blank password? What if it only requires a password and it's just really easy to guess? Or someone gave it to them? What if someone at the company sends them an internal URL but it's accessible on the internet? Does it matter if they sent it on purpose or by accident?
I admittedly haven't checked which of those if any have already had precedents established, but it's unlikely that every one of those scenarios has made it all the way to the Supreme Court, so what's the journalist supposed to do when they find themselves facing one where it hasn't? Not write the story? Do it anyway because maybe?
And it's not just a matter of how to tell where the line is. The journalist is being a journalist, not stealing credit cards. If the thing that matters is really intent then their intent was to expose a crime, and in that case why do we want a law that makes any of those illegal?
It's a good thing that intent is a major element of the crime, then.
If you just happen on a dump of a company's data, you didn't have the necessary intent. If you hit a login form and figure out that it has flaws and then use those flaws to access data, you do.
The examples you're giving don't seem to be ambiguous?
There's a pretty clear pattern if you look at cases where folks have found flaws in websites. Find a flaw? So far, so good. Test the flaw against dummy data or your own data? Still good. Test the flaw by pulling other people's data or trying things that would reasonably damage the company's infrastructure? Not good.
> If you just happen on a dump of a company's data, you didn't have the necessary intent. If you hit a login form and figure out that it has flaws and then use those flaws to access data, you do.
A good first question here is why should that be the thing that matters?
Take the scenario where it lets anyone create an account. It's not yet obvious at that point what the thing is even for, but you sign up for an account and it gives you one. Once you sign in the things you have access to might be the sort of things you might not expect to be public, but then how are you distinguishing that from a data dump with the same stuff in it? Or is this one allowed because they're still essentially granting access to the public?
If someone who works there gives you the password, are you now authorized because they just authorized you, or not authorized because the password was only meant for people who work there? What if the password is included as part of the link?
So is the form of access control really the thing that ought to matter? Or is it what you're accessing? But now notice that the company isn't going to purposely authorize you to view the evidence of their criminal activity, so maybe a law that imposes a blanket ban on anybody accessing anything a company doesn't want them to is broader than it ought to be.
> But now notice that the company isn't going to purposely authorize you to view the evidence of their criminal activity, so maybe a law that imposes a blanket ban on anybody accessing anything a company doesn't want them to is broader than it ought to be.
I think we've jumped pretty clearly here from actual discussion about the CFAA to a policy stance you're taking about how you feel it should be acceptable to hack companies if they deserve it.
You have not here presented a fact pattern that would put the journalist at risk. A journalist can safely write a story about the gross insecurity of a website. You could put 10 million bank account numbers behind a login field that accepts 'OR''=' as a password, and write about that. You could have a bypass for that login whereby incrementing an integer revealed those bank accounts, one after another, on an unauthenticated HTTP GET.
Where you get into trouble is when you use either of those conditions to collect bank account numbers. Whether you're collecting them to sell or collecting them as color (the amount, scale, diversity, whatever) for your story: you'll be expected to understand that you did not have authorized access to that data, and by collecting it, you'll have violated CFAA.
You would similarly be at risk when, having used the 'OR''=' password, you then poked around inside the website to see what else was exposed. That might "feel" like journalism. So too would be wandering around inside a bank you found unlocked at night. But no sane journalist would do what I just described.
In fact: this is straightforward. Further evidence of that: that journalists routinely write about this stuff and don't get prosecuted.
The Barrett Brown case is an especially good illustration of where the lines are drawn.
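The "incrementing an integer" flaw described a few comments up is what security folks call an insecure direct object reference. A minimal, purely hypothetical sketch (the endpoint, IDs, and data below are all invented for illustration, not taken from any real case):

```python
# Hypothetical sketch of the flaw described above: an "endpoint" that
# hands out records keyed only by a guessable integer ID, with no
# authentication check at all. All names and data here are invented.

RECORDS = {101: "acct-1111", 102: "acct-2222", 103: "acct-3333"}

def unauthenticated_get(record_id: int):
    """Simulates the vulnerable endpoint: no auth, just an ID lookup."""
    return RECORDS.get(record_id)

# The line the comments draw is between confirming the flaw with a
# single probe and harvesting everything. This loop is the latter —
# the "collecting" step said to cross into CFAA territory.
harvested = [r for i in range(100, 110) if (r := unauthenticated_get(i))]
print(harvested)
```

The point of the sketch is that nothing technical distinguishes the single probe from the enumeration loop; the distinction the thread is arguing about is entirely one of intent and scope.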
They're not trying to write a story about the security of the website, they're trying to write a story about the crime the company is committing. They're allegedly poisoning the water and killing people, it's more serious than a website. If they write the first story the company immediately takes the site offline before anybody else can see what's there, or if anyone does then they could get prosecuted.
The analogy to a bank vault doesn't work because it isn't a bank vault and you've never left your office. It's more analogous to finding the mailing address of the company's internal records office and then sending them a letter requesting a copy of their records. You should go to jail for requesting something it's not even illegal for you to have just because they were willing to send them to you without establishing who you are?
Yeah, you can't hack into websites to pursue stories about corporate misdeeds, any more than you could break into a company's office and rifle through the files. This is silly.
The question is at what point is it considered "hacking"? There is evidence of corporate misdeeds on the company's computers. Under what circumstances can a journalist view it? At no point would the guilty company want them to for obvious reasons, but if the answer is thereby "never" that seems like a major flaw in the law. Whereas if it isn't never then when is it, and why?
Or to extend your analogy, where's the computer equivalent of an investigative reporter getting let inside under a pretense so they can snoop around wearing a guest badge, instead of prying open the back door with a crowbar?
The question at the trial will be whether a reasonable person would have believed the evil corporation authorized the requests. You seem set on replacing the evil corporation with society, interposing a sort of "it's a public good for this information to come out, and so we'd generally authorize it". But if the company itself clearly wouldn't have intended you to have that access, and you knew that, and you used the access anyways, then yes: you committed a crime.
Again: mere ToS violations are not enough to cross that line.
What if I team up with another journalist, and I tell them about curl commands to run but never tell them that they're exploiting vulnerabilities in the company's website? That way they don't have the necessary intent and I never perform any illegal acts?
Do you think the judge would fall for it? Or would we have done a RICO?
No, that's exactly how Barrett Brown ended up in federal prison.
There ought to just be a blanket criminal law against intentionally causing financial damages to citizens over a certain amount. Fraud is typically a civil matter, but the problem comes when someone does $5000 of fraud to each of 200 people, which is made much easier by the internet. It doesn't make financial sense to sue for that amount. A law making it a felony, punishable by up to 10 years in prison, to intentionally cause $1 million or more of civil damages would let DAs apply well-deserved criminal penalties without the possibility of criminalizing harmless behavior.
Fraud is almost definitionally not a civil matter. There is civil fraud, but it bears the same relationship to fraud as the Goldman's wrongful death case did to the OJ criminal case.
I'm not talking about technical definitions. The fact is that if you have a fraud case in the USA, there's a 99% chance civil court is the only place it's ever going to be filed.
From where do you draw that statistic?
It's not a statistic. It's my personal experience in business and what my lawyers have told me. The 99% is just a number I pulled out of my ass to mean "basically always." If I were going to put money on it, though, I would say it's actually an understatement. Do you have any statistics?
Yes: the DOJ reports on things like qui tam cases and on total recoveries from all criminal fraud cases, and I think you have the ratio flipped.
The DOJ??? Are you even trying to be serious here claiming the feds are getting involved in anything more than a vanishingly small percentage of fraud claims?
The governing bureaucrats of the post WWII period have decided that the limited government of the previous era does not give them the level of control over citizens lives that they want. They know that rolling back existing protections is difficult politically since those pesky citizens don't know what's good for them. So our ruling betters need to be a bit more clever. They stoke fear over criminals being out of control because they are using scary new technology that the police just can't handle. Therefore we need to pass harsh new laws to control it. Of course over time that "scary new technology" becomes the routine way everybody communicates, but now without the legal protections that the old system had.
[dead]
At least they are disguising it. Australia has, in the state of SA, the "Police Complaints and Discipline Act 2016", which expressly allows police officers to investigate their own corruption, and when they inevitably conclude they did nothing illegal, anyone who complains to anyone else, including human rights organisations, will be arrested and charged.
Source: I was arrested by my ex-wife's boyfriend, who denied all my human rights in detention. All his ridiculous charges were thrown out, but he and his police partner were allowed to investigate themselves as to whether they violated any laws. I then received a threatening letter from the Attorney-General telling me I would be charged if I brought up the particulars of it with anyone.
This is why I harp about platforms needing to become decentralised and un-censorable. I was hoping the fediverse could become something in that direction but I'd also had hopes for things like IPFS, Matrix (Element I think used to host videos), DLive, SteemIT, etc. User adoption and network effects are really the only limiting factors - but as people get more disillusioned with Meta and X, there is always a space for competing platforms.
I take an Occam's Razor to the usual arguments against it - the problems these create (fake news, slander, etc) are already prominent in regulated media platforms, which also rely on community moderation as a result. The solutions it enables (space for fearless/citizen reporting, Streisand effects for censorship rather than big-tech powered banhammers) are wholly absent in regulated media.
Besides tech, and going by the press freedom index, one's only hope for good journalism today would be to incorporate in New Zealand. But you'd still have to face the odds of your content being banned in the countries they report on.
The other issue with the anti-press efforts by governments is that it weaponises the state against on-ground journalism and ends up encouraging out-of-country reporting as a result.
>> but as people get more disillusioned with Meta and X, there is always a space for competing platforms.
Diaspora way back when had a chance to take hold of the flame and do some damage to Meta. Even after it was released, I moved to it immediately. Nobody from any other social platform would join me.
It's one thing to create a decentralized platform; it's quite another to overcome the network effect, where friends and family have other friends and family who can't move now because none of their friends or family will move. This is why there was a very narrow window, before Meta became the juggernaut it is today, to get people to move to a more decentralized platform.
Now? Close to 100% impossible to win that game - regardless of the opportunity for freedom from censorship and government overreach. There will be small pockets of people moving to them, but there's a good chance we will never see the kind of numbers that Meta, X, YouTube or other platforms have right now. They are just so entrenched at this point.
Isn't Threads now federated? I just heard about it, I don't use Threads though or much outside HN
Why is this surprising to anyone? When the government is corrupt, the laws are just a convenient cover for doing whatever you wanted to do anyway.
Secondly, which countries does the article mention? Nigeria, Pakistan, Georgia, Turkey, and Jordan. Such countries strain the definition of "government" let alone "law".
> When the government is corrupt
Find me a government and I'll find you corruption.
Corruption and abuses of power exists in most countries, but the degree to which these abuses of power occur in Nigeria (hybrid regime), Pakistan (hybrid regime), Georgia (illiberal democracy turned hybrid regime), Turkey (illiberal democracy turned hybrid regime), and Jordan (hybrid regime) compared to their peers let alone Western countries is fairly well known.
The EIU, V-Dem, CPI, World Bank, and various other benchmarks highlight this as well.
You are oversimplifying. There are degrees of corruption, just like there are degrees of crime: running a traffic light isn't the same as shooting someone. This is not to dismiss corruption or breaking laws, but oversimplifying is a tactic used by authoritarians to create confusion in people who don't want to think too hard about things. Marcos and Aquino, Amin, Putin... Trump is more like Marcos: not massive slaughter, but oligarchs aren't even pretending to hide in the shadows. I'm sure the US's good guy Jimmy Carter had some corruption but is nowhere near Marcos.
People will abuse any law they can.
> "Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."
-Lyndon B. Johnson
200 good software and marketing engineers that ignore studies and fight for a good, evolutionary rational cause ... as good as they make proxy farms for scraping ... damn ... so much after work, so much to write about, so much to critique, so. much. capital.
These aren't really cybercrime laws as such; they're cybercrime statutes that include defamation and misinformation laws; it's those speech restrictions, which are explicit and not a knock-on consequence of fighting what we consider "cybercrime", that are the root of this reporting.
I won't be surprised if governments start outlawing journalism.
"US declines to join more than 70 countries in signing UN cybercrime treaty", 200 comments, https://news.ycombinator.com/item?id=45760328
World Cybercrime Index: https://www.ox.ac.uk/news/2024-04-10-world-first-cybercrime-...
https://www.atlanticcouncil.org/blogs/new-atlanticist/the-un...
> states parties are obligated to establish laws in their domestic system to “compel” service providers to “collect or record” real-time traffic or content data. Many of the states behind the original drive to establish this convention have long sought this power over private firms.
So, (1) this is a dead letter because UN cybercrime isn't going to happen here, and (2) it's not a good treaty and I wouldn't support it anyways, but the UN cybercrime convention doesn't have any of the problematic terms discussed in this CJR article. It seeks to criminalize:
(7) Unlawful access to systems
(8) Interception and wiretapping
(9) Interfering with data (presumably: encrypting and ransoming databases)
(10) DOS attacks
(11) Knowingly selling hacking tools to criminals
(12) Forging online documents
(13) Online wire fraud
(14) CSAM
(15) Solicitation and grooming
(16) Revenge porn
Articles 14-16 are the closest you get to something not "according to Hoyle" cybercrime. I wouldn't want them in my cybercrime treaty, but I'd be pretty chill about them being standalone domestic laws.
A reminder: no matter what a UN convention says, treaties don't preempt the US Constitution. We could not enforce a treaty that includes Nigeria's misinformation terms --- it would violate the First Amendment. (Also useful to know, contrary to widespread belief online, that a self-executing treaty is itself preempted by statutes passed after it).
>Well meaning
yeah, right
Welcome to Earth! Some people really enjoy exploiting legal loopholes.
Two years ago, I was sued for $10,000 in copyright infringement for embedding a YouTube video on my website. They filed a lawsuit by describing the word “embed” as if it were “upload.” But they are two different things. I won the case. But I realized that others didn't.
I learned that the company filed lawsuits against dozens of websites, especially Blogspot sites. I even heard a rumor.
They share content on social media and community sites in a way that entices people, focusing on areas that remain in a gray zone and where few people know it's illegal.
For example, “Embed movies from YouTube and share them on your website. You'll make a lot of money. If I knew how to program, I would do it.” This is just one example. There are many different examples. By the way, my site wasn't a movie site.
They apparently file lawsuits like clockwork against anyone who triggers their radar with the right keywords via Google Alerts.
Cybercrimes are just another reflection of this. If I could, I'd share more, but I don't want to go to jail. Freedom of expression isn't exactly welcomed everywhere on the internet.
[dead]
> Across the world, well-meaning laws intended to reduce online fraud and other scourges of the internet are being put to a very different use.
If only someone, anyone, could have foreseen this /s. I read so many HN comments about the "slippery slope fallacy," back when the powers that be were censoring the people that they didn't like. I bet they'll be right back where they were next time the government is going after the "misinformation" they don't like.
Everyone is an authoritarian towards the other side.
No, not everyone is like that. But plenty of people are.
Any instrument that can be used to repress opposition should be minimal, transparent, and tightly limited if it must exist at all. When power gets new levers, it always finds new ways to pull them.
But in this case it may be designed for that purpose.
> One provision in particular—Section 24, which made it illegal to publish false information online that was deemed to be “grossly offensive,” “indecent,” or even merely an “annoyance”—has been especially ripe for abuse
I mean how is this surprising to anyone?
Grossly offensive is in the eye of the beholder
> Grossly offensive is in the eye of the beholder
Quite right. However, certain media outlets have knowingly published false information, and when pushed on this they claim that those reports happened as part of the "opinion" side of their reporting. Before you get smug, your side does it too (as does mine). I am less concerned with blaming people than with coming up with a mitigation of these issues.
So I think we need a two-class system of reporting: a factual part, where knowingly reporting false information has consequences, and an opinion part, where it doesn't. Journalists would claim they already do this, but here is the new policy: reporting must consistently and clearly show which class the report belongs to. So maybe a change in background color on websites, or a change in the frame color for videos. Something that makes it visually and immediately clear which class this reporting belongs to. That way people can more accurately assess the level of credibility the reporting should have.
In a different time when different mindsets prevailed, the US government handled this about as well as you could hope
The Fairness Doctrine is irrelevant today because of the way news is published/broadcast, but was effective in my humble opinion
From Wikipedia: “The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters.”
And without getting too political, the beginning of a lot of our media woes in terms of news correlates nicely with when the doctrine was revoked
[dead]
What’s the principled line between journalism and crime, if there is one that isn’t just opinion? Often journalists are not just protecting sources but guiding them or encouraging them. And those sources are sometimes committing crimes like leaking trade secrets or other confidential info.
> Often journalists are not just protecting sources but guiding them or encouraging them.
Source?
The Rolling Stone scandal around "A Rape On Campus" article is a good example:
Rolling Stone’s investigation: ‘A failure that was avoidable’: https://www.cjr.org/investigation/rolling_stone_investigatio...
Last July 8, Sabrina Rubin Erdely, a writer for Rolling Stone, telephoned Emily Renda, a rape survivor working on sexual assault issues as a staff member at the University of Virginia. Erdely said she was searching for a single, emblematic college rape case that would show “what it’s like to be on campus now … where not only is rape so prevalent but also that there’s this pervasive culture of sexual harassment/rape culture,” according to Erdely’s notes of the conversation.
So cajoling is a crime?
Thankfully, no. But from reading comments on the internet it seems like "look what you made me do" is considered a valid excuse by a large percentage of so called adults in the US.
Incitement of violence is a crime
> And those sources are sometimes committing crimes like leaking trade secrets or other confidential info
I mean this with all sincerity: So what? What bearing does that have on the journalist and what they are writing?
I am also curious about that claim the other guy asked you about, “Guiding” sources and such.
I know it directly from first hand experience. And I liken it to jurisprudence on incitement to violence. Is incitement to theft also punishable? Does the motivation being journalism matter? Why or why not?
Journalists provide a valuable public service: publishing the truth. The position you’re advocating for is summed up as “fuck the truth, bend the knee to the law.” Your opinion is incompatible with a free society.
Hold on, who said a journalist was inciting criminal activity? That is a completely different animal. Of course I am not saying that’s fine. That’s not even remotely what I’m talking about.
Laws by definition are capsules of power. Some laws give more power to government, and some give power to particular groups of people, as with gender-based or caste-based laws, renter-vs-landlord rules, etc. Such laws are easy for the favored party to weaponize, and they actually increase crime through fake cases. In some Western countries, teen gangs create so much terror only because they are immune to punishment by law.