Nah. We’re right on the money with this one. AI is a nice tool to have available, but you AI nuts are the ones being voluntarily and gladly fed the whole “you’re a bazillion times more productive with our AI!!!!” marketing spiel.
It’s a nice tool, nothing more, nothing less. Anything else is marketing nonsense.
Software already exists that has been written by Claude. They absolutely are selling the means to write software, and the means to secure the insecure software. At least for the time being. In the future Mythos will probably just make it possible to prompt good software from the start.
Maybe because there’s no critical and widely used software written by LLMs so far? Which says a lot about how LLMs are failing to even approach the level of capability you would expect from all the hype. The goal has always been, even before LLMs, to find something smarter than our smartest humans. So far the success at that is really minuscule. Humans are still the benchmark, all things considered. Now they’re saying LLMs are going to be better than our best vulnerability researchers in a few months (literally what an Anthropic researcher said at a conference). Ok, that might happen. But the funny part is that the LLMs will definitely be the ones writing most of these vulnerabilities. So, to hedge against LLMs you must use LLMs. And that is gonna cost you more.
So today, most of the vulnerabilities being found by these tools are in code written by humans. Your hypothesis is that down the road, most of the vulnerabilities will be in code written by LLMs.
What seems more probable is that the same advances that let LLMs find vulnerabilities will end up baked into developer tooling. So you'll be writing code and using an LLM that knows how to write secure code.
No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).
While I agree that the word has been misused by some bad actors in the "Woke 1.0 era", it's worth pointing out that this isn't what most people complaining about the word being "diluted" are referring to, as these are mostly people flat-out upset by any suggestion that they themselves might hold racist beliefs.
That said, anyone using "racist" as a noun isn't worth your time, nor is anyone who's genuinely upset about people calling concepts, systems or ideologies "racist".
Specifically, the "Woke 1.0 era" culture war arose from two conflicting meanings of the word "racist" largely aligning with two different segments of the population: 1) "racist" as a bad word you call people who are extremely bigoted against people along racial lines and 2) "racist" as a descriptor for systems and ideologies downstream from racialization (i.e. labelling people as racialized - e.g. Black - or non-racialized - i.e. "white") as a mechanism of asserting a power structure. "Wokists" would often conflate the two by applying the word as broadly as the latter definition necessitates while still attempting to use it with the emotional weight and personal judgement of the former definition.
I think a lot of this can be blamed on "pop anti-racism", just as a lot of the earlier "boys are icky" nonsense can be blamed on pop feminism, because fully adopting the latter definition requires a critique of systems, which is much more dangerous to anyone benefiting from those systems than merely naming and shaming individuals.

Anti-racism (and feminism) ultimately necessitates challenging hierarchical power structures in general and thus necessarily leads to anti-capitalism (which isn't to say all anti-capitalists are anti-racist and feminist: there are plenty of "anti-capitalist" movements that still suffer from racism and sexism, just as there are "anti-racists" who hold sexist views or "feminists" who hold racist views).

But you can't use that to sell DEI seminars to corporations, and corporations can't use that to promote themselves as "woke", as some companies like Basecamp found out when their internal DEI groups suddenly started taking themselves seriously during the BLM protests, resulting in layoffs and "no politics" policies and a general rightwards shift among corporate America leading up to and into the second Trump presidency (which reinforced this shift, resulting in the current state of most US corporations and their subsidiaries having significantly cut down on their previously omnipresent shallow "virtue signalling").
I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.
I would be curious to hear you expand on that. Walk me through it, maybe in a small paragraph: what overinclusion happened with the word fascist, which baddies are you vaguely referring to, and how do those dots connect?
Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and so, rather than reflect on themselves, they decide the problem exists elsewhere. I'm sure you can come up with outliers that support what you're saying, but across the vast majority of applications, both words are used correctly relative to their definitions.
For me the hallucination and gaslighting is like taking a step back in time a couple of years. It even fails the “r’s in strawberry” question. How nostalgic.
It’s very impressive that this can run locally. And I hope we will continue to be able to run couple-year-old-equivalent models locally going forward.
I haven't seen anybody else post it in this thread, but this is running on 8GB of RAM. It's not the full Gemma 4 32B model. It's a completely different thing from the full Gemma 4 experience you'd get running the flagship model, almost to the point of being misleading.
It's their E2B and E4B variants (so 2B and 4B parameters, but also quantized).
The relevant constraint when running on a phone is power, not really RAM footprint. Running the tiny E2B/E4B models makes sense, this is essentially what they're designed for.
Depends on the phone; I have trouble fitting models into memory on my iPhone 13 before iOS kills the app. I imagine newer phones with more RAM don’t have this issue, especially with some new flagship phones having 16+ GB of memory.
Between the GPU, NPU and big.LITTLE cores, many phones have no fewer than 4 different power profiles they can run inference at. It's about as solved as it will get without an architectural overhaul.
Man I’m tired of people only caring about money when it comes to space. Meanwhile they lose their shit when someone suggests that tax payers shouldn’t pay for people’s Coca-Cola.
Obviously it isn't, but also obviously: this isn't a web browser in anything but technical implementation. It's a packaged, sold interface to a proprietary service, with a set of T&Cs that they are free to enforce.
Also every single one of these that I've seen before has fallen down in the same way. Chat apps that embed Facebook, third party YouTube viewer for Apple's VR headset, various other third party Instagram apps, etc.
I can't tell if this is a good faith question, but in the interests of good discussion: there are many ways they can do this. Technical solutions include blocking the user agent, blocking request patterns, client-side feature detection, and client-side attestation. But importantly, they are not limited to technical solutions; there are also things like cease-and-desist letters, breach-of-contract claims, pressure on software distributors, and lawsuits.
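To make the first of those concrete, here is a minimal sketch of user-agent blocking, the crudest technical measure listed above. All names here (`OfficialApp`, `is_blocked`) are illustrative assumptions, not any real service's implementation:

```python
# Hypothetical sketch: reject requests whose User-Agent header doesn't
# identify the official client. The prefix and function names are made up.

OFFICIAL_UA_PREFIXES = ("OfficialApp/",)

def is_blocked(headers: dict) -> bool:
    """Return True if the request should be rejected."""
    ua = headers.get("User-Agent", "")
    return not ua.startswith(OFFICIAL_UA_PREFIXES)

print(is_blocked({"User-Agent": "OfficialApp/2.1 (iOS)"}))  # False: looks official
print(is_blocked({"User-Agent": "ThirdPartyViewer/0.3"}))   # True: blocked
```

Of course a third-party client can simply spoof the header, which is why the list continues with request-pattern analysis and client attestation: each step raises the cost of impersonation rather than making it impossible.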
This is no judgement of whether these are the steps they might take, or whether they would be right in doing so, I want to remain neutral on this. But I would point again to the many instances of things like this happening in the past.
Detect the usage patterns of normal users vs. these apps, and then block access. Ultimately it comes down to the company's willingness to throw however many devs at thwarting this as makes sense for them.
Just as an example I remember, Facebook sponsored posts would be labeled, but if you dug into the HTML, what you'd get was random permutations or junk added to the label, like SSpoSnoSsorReD or something, and they'd use complicated overlays or other things to get the label to be visible. So you wouldn't just be able to use a simple easy rule.
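As a sketch of the other side of that arms race: one way a filter could still recover "Sponsored" from a label with junk characters interleaved (like the example above) is a subsequence check. This is my own illustrative example, not how any real adblocker works:

```python
# Hedged sketch: check whether the target word appears as a subsequence
# of the obfuscated label, ignoring case. The single-pass iterator trick
# consumes the haystack left to right, so characters must appear in order.

def contains_subsequence(haystack: str, needle: str) -> bool:
    it = iter(haystack.lower())
    return all(ch in it for ch in needle.lower())

print(contains_subsequence("SSpoSnoSsorReD", "sponsored"))  # True
print(contains_subsequence("Regular post", "sponsored"))    # False
```

The catch is false positives (any text containing those letters in order would match), which is exactly why simple rules break down and the whole thing stays a moving target.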
Like most things, it is a cat-and-mouse game that depends on how heavily they believe their revenue could be impacted. I am not sure why you think either of those companies would have a problem with banning individual users who are only suspected based on the app signature.