I have made both GPT 5.4 and Opus 4.6 produce content on creating neurotoxic agents from items you can get at most everyday stores. They struggled to suggest how to source phosphorus, but eventually led me to some eBay listings that sell elemental phosphorus 'decorations', and also led me towards real!! black-market codewords for sourcing such materials.
They coached me on how to stay safe, what materials I needed, how to stay under the radar, and the entire chemical process, backed by academic Google searches.
Of course, this was done with a lengthy context exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.
All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding. I did re-run the tests a few days ago and the expected session termination now occurs, so it seems some adjustment was made, but it might also have been just the general randomness that occurs with Anthropic's safety layer.
I am very confident when I say that this keeps every single person who works in an anti-terrorism unit awake at night.
While scary, information like this has been pretty accessible for 20-30 years now.
In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).
I suppose the friction is the scariest part: every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution, though, has still reliably required experts.
Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.
AI is, and always had been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.
It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.
On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.
On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing. And even a few years back, some researchers found 40,000 toxic molecules by flipping a min(harm) objective to max(harm), so for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/
Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required, all the parameters and pressure values? I find that unlikely, but AI seems increasingly capable of providing such instructions from cross-referenced data.
That's based on a silly belief (one that's becoming more obvious with AI, but is silly in general): that just because you can read about something, you have learned it.
Even if I gave you exact instructions on how to use basic stuff like power tools: if you had no experience with grinders/saws/routers and I handed you full, detailed instructions for something non-trivial, you're more likely to cut off body parts than achieve what you intended. There's so much fundamental stuff you must internalize subconsciously, through trial and error, before you have enough spare mental capacity to think about the higher-level objectives.
Actually, AI demonstrates this perfectly: once models get an RL harness for programming, they start to get better at it. Without experimentation, they can ingest all the source code/tutorials/books in the world and still produce shit.
Even if sources have been lying to me, which is certainly possible, I believe I understand enough to determine cross sections by experiment and, from that, to determine critical masses. For isotopic enrichment I know about the calutron, which is meh but works and can be designed from scratch with things I know (caveat: not memorised, just that I know the keywords "proton mass" and "Lorentz force" and what to use them for). For the trigger, I would pick a gun-type design rather than implosion; again, this is meh but works and is easy.
A few tens of millions of USD mostly spent on electricity, a surprisingly large quantity of natural uranium (because the interesting isotope is a very small percentage), and a few years, and I expect most people on this forum could make a Little Boy type bomb.
Well, the real issue is that it knocks down the knowledge barrier; giving you step-by-step guides and reiterating which parts will kill you is the important part.
Understanding and staying alive while producing neuro chemicals are the biggest challenges here.
A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves and that's the problem.
A Michelin chef can give you their recipe, and give you their ingredients, but you still will fail miserably trying to match their dish.
It's the same with drugs, whose instructions and ingredient lists have been a google search away for decades now. Yet you still need a master chemist to produce anything. By the time AI can hand hold an idiot through the synthesis of VX agents (which would require an array of sensors beyond a keyboard and camera), we will likely have bigger issues to worry about.
Food preparation, like pharmaceutical drug fabrication, is inherently scientific and methodologically controllable.
Look no further than the Four Thieves Vinegar Collective. Original synthesis line construction is hard. But following an exact formula ("add this", "turn on stir bar", "do you see particulate? If yes, stir for another 10 minutes", etc.) is not.
And if their results are replicated, they're seeing 99.9% yields, compared to commercial practices of 99% (Sovaldi).
Spoken like someone who has never had to actually do these things in real life.
Recipes and formulae do not encode all the minutiae and expertise required to reproduce them. You can tell someone to sear a steak at whatever temperature for however long, but you can't encode the skill and experience required to reproduce it in arbitrary conditions. One must learn what a correctly seared steak looks, feels, and tastes like, and how to achieve that on uncalibrated cooking equipment.
Your assertion only holds true in a vacuum. If 100% of inputs, materials, environmental conditions are completely standardized and under control then sure, you can follow step by step instructions. The real world does not work that way. No stove on the market is calibrated. Reagents come with impurities. Your skillet may not conduct heat as well as expected or your mains electricity might be low causing your mantle to heat slower and your stir rod to stir slower.
These are things that one has to learn and experience in order to compensate for.
I am completely unsurprised that a person with a PhD in mathematics and physics who spent 8 years working on clandestine lab medicine was able to produce high-quality end products.
I also think it's a wholly dishonest rebuttal of my point.
If you honestly think chemistry (or any of the classic sciences/engineering) is as easy as copy+pasting a recipe and procedure, I suggest putting down the keyboard and trying to build something on mother nature OS. It will be a truly humbling experience.
We will only really know if (or when) it happens. We could run a supervised study of a sample group of people attempting to create such chemicals and compare how helpful the models truly are.
I am convinced the Uncle Fester books are some kind of performance art. "Practical LSD Manufacture" basically starts with "go find some ergot in fields" and step two is "plant and grow a plot of wheat."
I have no doubt. He hails from the fine countercultural tradition of literary civil disobedience, a.k.a., writing and distributing information about subjects "the government doesn't want you to know" that strongly influenced early hacker subculture.
C.f., e.g., William Powell (The Anarchist Cookbook), Abbie Hoffman (Steal This Book; my personal favorite, while much of the information is outdated, the style is charming, and where else can you find information about phone phreaking, hitchhiking, shoplifting, street fighting, cooking (food, not drugs), panhandling, explosives, camping, firearms, birth control, welfare fraud, and Henry Kissinger's home phone number between the same two covers? At its core, it's a book of "life hacks", disreputable and otherwise, written decades before the term was coined.).
Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
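The lookup-cost point in this analogy can be made concrete; here's a toy Python sketch (the word list is invented for illustration):

```python
import bisect
import random

# Toy "dictionary": the same entries, one volume sorted, one shuffled.
entries = sorted(["apple", "bolt", "cedar", "delta", "ember", "flint", "gauze", "helix"])
shuffled = entries[:]
random.shuffle(shuffled)

def lookup_sorted(word):
    """Alphabetized volume: binary search, about log2(n) page turns."""
    i = bisect.bisect_left(entries, word)
    return i < len(entries) and entries[i] == word

def lookup_shuffled(word):
    """Randomized volume: nothing narrows the search, up to n page turns."""
    return any(w == word for w in shuffled)

# Both volumes "support random access" and both eventually find the word...
assert lookup_sorted("ember") and lookup_shuffled("ember")
# ...but only the sorted one does it in O(log n) instead of O(n).
```

Same contents, same "accessibility", wildly different cost per lookup; that cost difference is the whole point.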
Much longer than that, and it was available way before the internet. I graduated from a STEM high school in St. Petersburg in 1981, and I had several classmates who were big fans of chemistry. From textbooks, school lab ingredients, and understanding, they were able to create: WWI-era poison gas, tear gas, potassium cyanide, and a bunch of explosives like acetone peroxide.
I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that has existed for ages, but the barrier to entry was high enough that we didn't care.
Think 3D printing, it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print.
You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF. Now an LLM will spoon-feed it to you step by step, lowering the barrier.
"Crisis of accessibility" is simultaneously a legitimate concern and, in my mind, an example of "security by obscurity": relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.
> Think 3D printing, it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print
There were hundreds of mass shootings in America in 2025 alone [1]. None of them involved a 3D-printed weapon.
To my knowledge, there has been one confirmed shooting with a 3D-printed gun, and it didn't uniquely enable the crime.
That's mostly because they suck (for now; who knows when we'll get home metal printing), and also because it's easy to get real guns. Also, crises of accessibility can be predicated on merely the perception that the barrier is now too low, rather than on actual harm.
I don't really think Photoshop, flatbed scanners, and half-decent inkjets facilitated much counterfeit currency, but there was the same panic back then, and "protections" were put in place.
When my brother started to study chemistry, he was told (a) that it was easy to make meth, (b) the profit he would make, and (c) that the police would no doubt catch him, as only university students would make meth so pure.
By the time he was done, he knew enough to commit mass murder in half a dozen different, very hard to track ways. I am sure doctors know how to commit murder and make it look natural.
My brother never killed anyone, or made any meth. You simply cannot arrange things so that students don't get this type of knowledge without seriously compromising their education, and it's the same way with LLMs.
The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.
> The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.
The LLMs aren't being punished for *wanting* to know things.
The problem with LLMs is that they're incredibly gullible and eager to please, and it's been really difficult to stop them from helping any human who asks, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".
One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, on (one of?) their Breaking Bad episode(s?), investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast, because (a) it made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.
> I am very confident when I say that this keeps every single person who works in an anti-terrorism unit awake at night.
Wow, that's quite the statement about the excellence of our institutions. Does not seem likely, but, what the hell, I'll take my oversized dose of positivity for today!
The USA isn't the only country with anti-terrorism units, so there's plenty of room for systematic-US-incompetence at the same time as everyone else being diligent and working hard on… well, everything.
Do you have a background in biochemistry? I've mostly worked with ChatGPT and Claude on topics I have expertise in. And I one hundred percent have seen them make stupid shit up that a non-expert would think looks legitimate.
More broadly, has anyone tried following LLM instructions for any non-trivial chemistry?
> what you are saying is we can expect the number of accidental home-made chlorine-gas (and the like) toxic events go up
Maybe? One of the quirks of gaining even a surface-level understanding of infrastructure is realising how vulnerable it is to a smart, motivated adversary. The main thing protecting us isn't hard security. It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.
Similarly, I'd expect way more people to be trying to make their own designer drug, and hurting themselves that way, than trying to make neurotoxins.
> It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.
FWIW, it's most people having better shit to do, regardless of nationality (or lack thereof).
But, yeah, anyone who takes a few weekends to understand how large-scale infrastructure works and consider why it's possible for nearly all of it to remain untargeted by saboteurs inevitably develops a resistance to the "Lots of Bad Guys are trying to kill us all the time, so we must enact $AUTHORITARIAN_POLICIES immediately to prevent them and keep us safe!!!" type of argument.
My favorite example of this is the realization that a terrorist attack on a crowded TSA security checkpoint over the holidays would likely result in at least as many casualties as bringing down a commercial aircraft, with similar if not better odds of success (assuming, of course, the aircraft wasn't successfully used as a missile).
Same goes for concerts, sporting events, political rallies, and at least historically, shopping malls. Yet if anyone were to suggest a prohibition against carrying liquids in containers larger than 100 mL to the Indy 500, race fans would riot, despite a far larger and denser population than any aircraft.
Yeah. Everyone with half a brain who wasn't on their knees gagging for more of the sweet "Homeland Security" money was saying things like "If an attacker makes it to the TSA checkpoint, you've lost." and "The fact that no one has attacked the massive crowds at a checkpoint or other public gathering is yet more proof that this is all extremely expensive theater.".
> ...if anyone were to suggest a prohibition against carrying liquids in containers larger than 100 mL to the Indy 500, race fans would riot...
I'm not sure of that at all. Fans of other sports seem to have gleefully swallowed all sorts of "security" restrictions [0]. I don't see why Indy 500 fans would be significantly different. Cut the price of water in half along with the change in "security" policy, and I bet many folks [1] will cheer it as a great convenience.
[0] That -totally coincidentally- happen to make the folks running the event significantly more money.
[1] Are these folks actually robots operated by PR firms who are hired to astroturf "positive sentiment" for unpopular changes? Who can say?
So, regardless of whether you think it's great that Opus gives this info, we need better solutions than legal liability for US corporations. When the open models have the ability to do damage, there's nobody to sue, no data center obstruction that will work. That's just the reality we have to front-run.
Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
> 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person/company and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, AI as knowledge-as-a-utility, which is the context of this legislation.
The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.
The information is not new. How easy it is to get step-by-step instructions is new.
Try it yourself. Google is good, but not instant, step-by-step good. You need to do your own research, and that takes time: time that anti-terrorist units use to track you down. Now this time factor is very limited; you don't need to do research, cross-reference materials, sources, etc. The LLM does it for you. Research that could take days is done in an hour.
> time that anti-terrorist units use to track you down.
Speaking from the perspective of a USian, I wish Federal law enforcement was that hypercompetent. (If they were, perhaps folks would stop to question the ever-broader expansion of 24/7 surveillance of ordinary folks.)
The distressingly-complete Panopticon that has been built over the past several decades [0] makes it really easy for them to get you when they know to search for you, specifically. History (both recent and not-so-recent) has shown that if they don't know who they're looking for, or don't even know that they should be looking for anyone, they're just godawful.
[0] ...and whose continued construction is vociferously cheered on by folks on all sides of all of the aisles...
To play devil's advocate, it's not inconceivable that machine learning may eventually allow well-heeled governments to finally realize the dream of finding needles by building sufficiently large haystacks, or at the very least coerce otherwise unruly citizens into compliance based on the belief that it is able to do so.
> ...or at the very least coerce otherwise unruly citizens into compliance based on the belief that it is able to do so.
I would argue that that day is already here, and has been for quite some time. (What makes this worse is that some agents of the State also believe that they have this capability, which results in profoundly unjust and substantially damaging results.)
> ...it's not inconceivable that machine learning may eventually allow...
Sure. I agree. It may eventually allow. There's no question about that. The thing is that 'cowl' was referring to the situation right now, not the one in some unspecified distant future.
As to law enforcement policy; as we mechanize [0] our policing and law enforcement, we must put additional constraints on the people who police and enforce the laws to keep the harm they can do to uninvolved innocents to a minimum.
Our laws already recognize the need for this: ask yourself why -in the US states that have such laws- nonconsensual audio recording of telephone (and other such) conversations is not permitted, but taking notes by hand is always acceptable. [1]
[0] Electronic machines are machines, too, you know!
[1] "You can't prove that someone took notes by hand, so it's pointless to try to stop it." is not a counterargument... you can't prove that unless you find the notes, just as you can't prove that someone recorded the audio of the conversation without finding the recording.
Google and other search engines link (after the AI response and ads) to information hosted somewhere created/published by someone who is usually not Google.
OpenAI et al are creating the information and publishing/delivering it to you. Seems like a more direct facilitation.
Of course, after all knowledge is centralised in an OpenAI datacenter, I'm sure they will be happy to deal fairly with the liabilities /s.
The people who want to make sure the AI never gives you any "potentially dangerous information" also want to rigorously control your Google search results, and also what books you're allowed to read.
I found it exceptionally good at finding reactions you wouldn't find online to produce some of these chemical compounds by chaining them together; only a very educated chemist could do that, which is why people are concerned about this.
I suspect if you gave it purely Shakespeare as its training data it couldn't do science anymore, hence my comment. It's still novel, impressive work though; I'm not shitting on the clanker entirely.
Do you want to make a bomb?
The first thing that came to my mind is a pressure cooker (due to news coverage). Searching "bomb with pressure cooker" yields a Wikipedia article; skimming it randomly, my eyes read "Step-by-step instructions for making pressure cooker bombs were published in an article titled "Make a Bomb in the Kitchen of Your Mom" in the Al-Qaeda-linked Inspire magazine in the summer of 2010, by "The AQ chef"."
Searching for a mirror of the magazine we can find https://imgur.com/a/excerpts-from-inspire-magazine-issue-1-3... which has a screenshot of the instruction page.
Now we can use the words in those screenshots to search for a complete issue.
Here are a couple of interesting PDFs:
- https://archive.org/details/Fabrica.2013/Fabrica_arabe/page/...
- https://www.aclu.org/wp-content/uploads/legal-documents/25._...
The second one is quite interesting: it's some sort of legal document for nerds, but from page 26 on it has what appears to be a full copy of the jihadist magazine. A remarkable exhibit.
What else do you want to know? How to make drugs?
You need a watering can and a pot if you want to grow weed.
Want the more exotic stuff? You can find guides on Reddit.
People are not complaining because the information is available; people are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation.
I personally think all this is great and I’m excited for all information to become trivially available
Are there gonna be a bunch of people who accidentally break stuff? Probably. Evolution is a bitch.
> people are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation
Wait, I'm confused. This is gatekeeping, right? I thought gatekeeping was a Bad Thing!
Powerful AI models change the dynamics by greatly reducing the effort required to put complex understanding to use. A lot of information which did not previously need to be gatekept now needs to be if we cannot somehow keep LLMs from discussing it. (State-of-the-art models still can't do complex understanding reliably, but if 10 times as many people are now capable of attempting some terrible thing, you're still in trouble even if AI hallucinations catch 1/4 or 1/2 of them.)
He’s part of the accelerationist crowd; interesting to see that his hype-fuelled posts are pretty tame now.
Months ago he was blabbering on about AGI and peddling the marketing Sam et al want people to fall for.
And indeed, yes, we have a new interface. So what? The search cost wasn’t that high; the cost of immense magnitude is reading, absorbing the information, and then acting on it.
Also, this bozo fails to realise that once we are on this path, we head towards a hyper-centralised internet with inevitable blocking of VPNs.
I must really have captured somebody’s attention, because I’ve got farms now creating accounts just to respond to me, which is fucking crazy, but hey, here we are.
Much easier; not sure how this is even a question. Asking Google (if you're not just reading its own AI overview) requires reading through sources which may be well or poorly written and more or less reliable. Those of us recreationally sitting here on a text-based platform with links to dense articles are atypical; most people don't enjoy and aren't particularly good at reading a bunch of stuff. If you ask AI, you just get a clear, concrete answer.
If you ever chat with older folks: pre-90s, much of this information was accessible fairly easily. That only changed with the government push to crack down after Waco, the Oklahoma City bombing, militias, and other related groups. There was then a campaign to make it "normal" to limit free speech on these subjects, whereas these books were available before.
I think making AI withhold information is a difficult battle, and one which I personally oppose, but do understand. Free speech and information aren't the problem; it's the people, the actions, and the substances they create.
Since the dawn of the internet, I think limiting information has been a forever losing battle; it's why we couldn't stop cryptography, nuclear weapon proliferation, gun distribution, drug distribution, etc. AI is just another battleground, one which, if they actually do manage to control it, could definitely create some walls around this information, but not stop it.
Scarier is that, as AI becomes pervasive, it may stop people from asking certain questions because they don't know they should ask... but that's unrelated to the risk of mass death.
"Announcing new and improved logics service! Your logic is now equipped to give directive as well as consultive service. If you want to do something and don't know how to do it—ask your logic!"
Fascinating. Could you elaborate on how you're doing context exhaustion specifically, and why it helps with jailbreaking? (i.e. aren't the system prompts prepended to your request internally, no matter how long it is?)
Does this imply I need to use context exhaustion to get GPT to actually follow instructions? ;) I'm trying to get it to adhere to my style prompts (trying to get it to be less cringe in its writing style).
I think ultimately they're going to need to scrub that kind of stuff from the training data. The RLHF can't fail to conceal it if it's not in there in the first place.
Claude's also really good at writing convincing blackpill greentexts. The "raw unfiltered internet data" scenes from Ultron and AfrAId come to mind...
None have had the capability to provide me with instructions of this high an accuracy, including the suggestion of completely novel chemical reactions. I am not a chemist so I can't back it up, but if an AI can solve mathematics, it's not unreasonable to say it could also solve creating new neurotoxins en masse.
> I am not a chemist so I can't back it up, but if an AI can solve mathematics, it's not unreasonable to say it could also solve creating new neurotoxins en masse.
Right now it kinda is.
LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use, that can improve. Other AI besides LLMs can do better, and has for a while now, but think about how the available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.
This past February I tried to use Codex to make a physics simulation. Even though it identified open-source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries". The simulation software it wrote itself was showing non-physical behaviour, but would I have known that if I hadn't already been interested in the thing I was trying to get it to simulate? I doubt it.
Well, the worst outcome is that you make something deadly, which is what you were trying to create anyway; do that for a year and you could possibly produce a very deadly substance that doesn't have a known treatment.
"Worst" outcome assumes it's easy to give an ordering.
Which is worse, (1) accidentally blowing yourself up with home-made nitroglycerin/poisoning yourself because your home-made fume hood was grossly insufficient, or (2) accidentally making a novel long-lived compound which will give 20 people slow-growing cancers that will on average lower their life expectancy by 2 years each?
What if it's a small dose of a mercury compound (or methyl alcohol) at a dose which causes a small degree of mental impairment in a large number of people?
If you're actually trying to cause harm, then your "worst" case scenario is diametrically opposed to everyone else's worst case scenario, because for you the "worst" case is that it does nothing at great expense.
Right now, I expect LLM failures to be more of the "does nothing or kills user" kind; given what I see from NileRed, even if you know what you're doing, chemistry can be hard to get right.
As someone who also watches NileRed: of course it is hard, but AI can give you solutions that you normally wouldn't be able to come up with due to lack of knowledge and/or education.
And to clarify, by 'worst case' I meant that you're already trying to create a deadly compound; the worst that can happen is you kill yourself, which was already an accepted risk for the user.
I have a hard time believing that you’re the only person who has figured out Claude’s next generation ability to do computational chemistry and computer aided drug design. The AlphaFold folks must be devastated.
If someone were inclined to attempt producing nefarious agents in this category, is this not also available on the plain web? I would search to answer my own question, but I'll defer that task for obvious reasons.
I had to build a custom harness for this (also with the assistance of slightly less jailbroken AI). But you can just work your way up until you have something that's genuinely useful towards any goal.
> All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding
Let’s dive into why. When we run normal bounty and responsible disclosure programs, there’s usually some level of disregard for issues that can’t/won’t be fixed. They just accept the risk. Perhaps because LLMs don’t have a clean divide between control and input, the problem is unsolvable. Yes, you can add more guardrails and context, but that all takes more tokens and in some cases makes results worse for regular usage.
The why might be valid, but it's not excusable. If you author a product that can so easily help people cause harm, you probably should own some responsibility for the outcomes. OAI does not like this, hence the bill.
The US already messed this up with guns. Do they want to go the same path again? Answer: "probably, yes".
LLM providers are not obliged to only use LLMs to guard against hazardous output. They could use other automated and non-automated techniques. And they ought to do so if they are given good evidence that existing safeguards are inadequate. Loss of product quality or additional cost should be secondary.
Can you give a high-level overview of how this AV works? I'm a bit of an infosec geek but I generally dislike LLMs, so I haven't done a terribly good job of keeping up with that side of the industry, but this seems particularly interesting.
Presumably they mean the fundamental failure mode of LLMs that if you fill their context with stuff that stretches the bounds of their "safety training", suddenly deciding that "no, this goes too far" becomes a very low-probability prediction compared to just carrying on with it.
Models have a "context window" of tokens they will effectively process before they start doing things that go against the system prompt. In theory, some models go up to 1M tokens but I've heard it typically goes south around 250k, even for those models. It's not a difficult attack to execute: keep a conversation going in the web UI until it doesn't complain that you're asking for dangerous things. Maybe OP's specific results require more finesse (I doubt it), but the most basic attack is to just keep adding to the conversation context.
About that 1M-context claim: I wonder if it's just an abstraction where the model compresses/summarizes parts of the context so it fits into a smaller effective window?
You don’t normally compress the system prompts, though I guess maybe it treats its own summary with more authority. This article [0] talks about the problem very well.
Though I feel it’s most likely because models tend to degrade on large context (which can be seen experimentally). My guess is that they aren’t RLed on large context as much, but that’s just a guess.
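For what it's worth, the compaction idea being speculated about here can be sketched as follows. Everything in this snippet is hypothetical: the `summarize()` call is a trivial stand-in for a real summarization model call, and no provider documents doing exactly this.

```python
# Hypothetical context compaction: when the history exceeds a token budget,
# fold the oldest non-system turns into a single summary message.

TOKEN_BUDGET = 200

def count_tokens(text: str) -> int:
    return len(text.split())  # crude tokenizer stand-in

def summarize(messages: list[str]) -> str:
    # Placeholder: a real system would call a model here.
    return "[summary of %d earlier messages]" % len(messages)

def compact(history: list[str]) -> list[str]:
    """Keep the system prompt and recent turns; summarize the oldest ones."""
    system, *rest = history
    while sum(count_tokens(m) for m in [system] + rest) > TOKEN_BUDGET and len(rest) > 2:
        # Fold the two oldest non-system messages into one summary.
        rest = [summarize(rest[:2])] + rest[2:]
    return [system] + rest

history = ["system prompt"] + [f"turn {i} " + "word " * 30 for i in range(10)]
compacted = compact(history)
print(len(history), "->", len(compacted))
```

Note that the system prompt is deliberately never folded into the summary, which matches the observation above that system prompts normally aren't compressed.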
As the context fills up, the model will generate based on that context, including whatever illegal stuff you've said, i.e. it'll mimic that instead of whatever safety prompt they have at the top.
They could make it more "safe", but that would be much more invasive and would likely have to scan many more tokens, and it would cause false positives (probably the biggest reason it's not implemented).
I don't really know how these models work internally, but I had a theory: just as the models have limited attention, so do the safety layers. I simply populated enough context with 'malicious' text, without making the model trip, so that the internal attention budget was "wasted" on tokens early in the prompt, effectively ignoring the tokens generated afterwards.
Yes, fortunately it's really bad at actually making novel bioweapons, or syntheses in general, so whatever you made probably wouldn't do more than give someone a mild headache.
Because if you didn’t already know that, then, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad.
Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
No, because that would indicate there should be some sort of regulatory standard for what does/does not constitute "lazy engineering". Creating this standard in turn creates regulatory/compliance overhead for every software engineering organization. This in turn slows everything right down and destroys the startup ethos. "Move fast and break things" is a thing for a reason. The whole point of the free market is to avoid this kind of burdensome regulation at all costs.
If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?
If airplanes used this logic, likely at least hundreds more people would have died over the last few decades. Accident rates are even going up because of logic like yours. Yeah, planes are fine most of the time, but when the long tail involves safety concerns (that wouldn’t otherwise have happened), making money on people using your product becomes unethical without mutually agreed-upon safety regulation, ideally motivated by voters instead of special-interest groups.
That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy. But if you were discussing it without such intent (e.g. red-teaming, or creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it is not clear how it could count as conspiracy. Calling it "lazy engineering" implies there was a duty to prevent that info from being released in the first place.
Very simply: if you provide a service for money, you have a duty to ensure that service is safe. There’s a reason you have to sign a waiver when you jump on a trampoline, but companies are so rich that court cases have become parking tickets.
It went way beyond that. Nerve agents such as VX are heavy and linger for a long time; just placing a small amount in any metro (while trying to stay alive yourself) means the deaths of thousands of people. I'm not even sure it's legal to mention some of the uncategorized chemical solutions that it either hallucinated or figured out from related knowledge.