Hacker News: JumpCrisscross's comments

Disaster response is a lie researchers tell themselves when building military hardware. The purpose of such robots would be to e.g. burrow into the collapsed tunnels at Fordow and confirm the uranium is there. (Or, alternatively, burrow into military tunnels to identify targets.)

> the FBI got this man killed with a sloppy indictment

How do we know that’s how they discovered Garrison was cooperating?


Garrison was killed four days after the indictment was released. From the text:

> On September 18, 2020, the Justice Department unsealed a seven-count indictment charging Garrison with “staging over fifty accidents.” Alfortish and Motta weren’t indicted or named in the document, but they were described, respectively, as “Co-Conspirator A” and “Attorney B.” Garrison’s coöperation with the F.B.I. wasn’t referenced in the text—and it might have seemed that charging him in such a public fashion would be a good way to conceal his role as an informant. But a close reading of the filing encouraged certain inferences. One stray sentence asserted that “Co-Conspirator A instructed Garrison on the number of passengers to include in staged collisions.” Alfortish might have made some unconventional life choices, but he wasn’t a total idiot. He certainly hadn’t supplied that information to the Feds—and the only other person who could have done so was Garrison.

> Four days after the indictment was made public, Garrison had dinner with his mother, Sandra Fontenette, who was seventy-four, at the tidy condominium that she owned, on Foy Street. They ate gumbo and talked. Garrison had been texting with a woman named Kim that afternoon, and they had made plans to hang out after dinner. At around eight-thirty, the doorbell rang, and Garrison went to meet her. But, upon opening the front door, he shouted to his mother, “Get down!” Ten shots rang out, and Garrison collapsed on the floor, dead.


Oof. Thank you.

It’s implied in the article

> That's what you voted for, freedumb-loving right-wingers

The right is worse. But policing language has been going on in the far left for about a decade, too. There is an illiberal strain poisoning the population through social media.


> large majority of the scientific community is treating it and calling it an existential threat

I haven’t seen evidence of this. What I see is scientists making measured predictions of massive costs: lost human life, damaged economies, refugee crises, and wars. Extinctions. Like, horrible stuff. But not extinction or even civilisational collapse.


So extinctions, but not extinction?

> extinctions, but not extinction?

Yes. Extinctions are horrible, but they aren’t an existential threat to us. Climate change simply isn’t an existential threat. That doesn’t mean it isn’t urgent. Like, the Bronze Age collapse and the Black Death and WWII weren’t existential; that doesn’t mean they were fine. But raising the stakes beyond what the science says undermines the credibility of the real warnings.


“…the F.A.A. determined that the risk would be minimal even if the laser came into contact with an airplane”

I’m curious to know more about the testing. Was it only done on airliners, or GA aircraft, too?


Is this an admission that the arrests and prosecutions of people with lasers were a farce?

The University of Waterloo sort of nails the balance with its co-op focus on constant, paid internships.

Denmark is linked to the Norwegian grid, which is essentially all hydropower [1]. It imports baseload when needed and exports cheap solar power when not.

[1] https://en.wikipedia.org/wiki/Electricity_sector_in_Norway


> It is 100% an existential threat, but the existential bit happens in 100 years

No, it’s not, and no, we don’t know that. Humans will survive climate change. Rich countries will survive, too.

We will all suffer. Economically, healthwise and aesthetically. But that’s not existential. Framing it as such is disingenuous and counterproductive.


We will go from 8 billion humans to maybe 1 or 2 billion, but that is probably going to happen either way. Poor countries will be obliterated; rich countries are likely to see tanking living standards. Long term, humans go extinct (or are superseded by some sort of singularity successor) and the earth recovers in a few thousand years as if we never existed.

RCP8.5 is pretty much dismissed as unlikely for some reason, even as the major superpower on the planet pulls out of the Paris Agreement on climate change.

There is clearly a temperature at which this planet will not support human life, and we could definitely get the planet to that temperature if we don't change course and reach net zero.

Saying it's not an existential threat is just wild to me.


> There is clearly a temperature at which this planet will not support human life

Yes, but that temperature isn't going to be reached by fossil fuels.*

The reduced brain function from the extra CO2 (if we burned all of it) may make us unable to adapt to the higher temperature, however.

* Ironically, unbounded growth of PV to tile all Earth's deserts could also raise the planet's temperature by 4 K or so, and 6 K or so if tiling all non-farm land.

Deserts are huge, so this by itself would represent an enormous increase in global electricity supply; but also, current growth trends for PV have been approximately exponential (in the actual maths sense, not just "fast") for decades now, so this could happen in as little as 35 years, give or take a few (both scenarios are within the same margin for error, because exponential growth is like that).
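A figure like 35 years falls out of simple doubling arithmetic. A minimal sketch with purely illustrative inputs (the installed-capacity, desert-tiling, and growth numbers below are assumptions for the sake of the calculation, not figures from the comment):

```python
import math

# All three figures are assumptions chosen only to illustrate the arithmetic.
current_tw = 2.0      # assumed global installed PV capacity today, in TW
desert_tw = 1000.0    # assumed capacity if all deserts were tiled, in TW
growth_rate = 0.20    # assumed ~20%/yr exponential growth

# Years until current * (1 + r)^t reaches the target:
years = math.log(desert_tw / current_tw) / math.log(1 + growth_rate)
print(round(years))   # -> 34, under these assumed inputs
```

The point of the sketch is only that a ~500x capacity gap closes in a few decades at a steady ~20% annual growth rate; nudging the assumed inputs shifts the answer by only a few years, because the logarithm compresses large errors in the target.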


> There is clearly a temperature at which this planet will not support human life, and we could definitely get the planet to that temperature

There is such a temperature. We are not getting to it in half a century at current emission rates, even with zero curtailment. If you have a source that shows the opposite, I’d be happy to read it.


Of course not in half a century, but it's not like the earth just stops getting hotter after 2100 rolls around.

What about 2200? Humanity at 2300? It's the same planet with the same feedback loops after all.


> What about 2200? Humanity at 2300?

You literally said “the existential bit happens in 100 years.”

And to your questions, we don’t know. I’d love to see the data. I’m still sceptical we hit “existential” levels for human survival. That wouldn’t even happen if we went back to dinosaur levels of CO2.


But then you’d expect the trend to self-correct in the long run. AI actually does seem to be replacing customer-service and CS jobs effectively.

From what I've seen, many efforts to replace roles such as customer service with AI are being rolled back or downscaled due to intolerably high error rates and general incapability. While these segments won't come out unscathed, I don't think the actual impact will end up being as severe as feared.

I believe that too. Broadly, I’m agreeing with the parent comment—AI can’t be causing long-run layoffs and be worthless.

You're apparently assuming that AI-related layoffs are rational, based on those making the decisions having good information about what their own organizations are achieving with AI.

I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI, and report how much they are using, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he is demanding to hear, then pats himself on the back and initiates another layoff. Meanwhile in the trenches little if anything has actually changed.


> assuming that AI-related layoffs are rational

Nope. I’m saying if firms lay off on the assumption of AI gains that never come, they’ll be beaten by firms who don’t.


OK, but your post reads as if you think that AI being the cause of layoffs can't be true if AI is "worthless" (less capable than they are assuming), which is false.

CEOs are laying people off because of AI because they think it will save them money, but are doing so based on misinformation, largely due to their own insistence that everyone use AI and report how much they are using - they are just hearing what they asked to hear (just like Mao hearing about impossible levels of rice production during the "Great Leap Forward"). I'm not making this up - I've seen it first hand.

You can see the proof of this - companies laying off because of what they mistakenly believe AI can do - in companies like Salesforce, forced to do an embarrassing U-turn and hire people back when reality set in. At least Salesforce were quick to correct - most big companies are not so nimble or ready to admit their own mistakes.

We seem to have reached mania-like levels of rice-production reporting, with companies like Meta now taking AI token usage as a proxy for productivity and/or a measure of something positive, and apparently having a huge leaderboard displaying who is using the most (i.e. spending the most money!). The only guaranteed outcome of this is that they will indeed see massive use of tokens, and a massive AI bill, and then in a year or so will likely be left scratching their heads wondering why nothing much appears to have changed.


Might be true, but unfortunately, we need to pay for rent/mortgage/groceries in the short run.

> One man's education is another man's indoctrination

This is pretty silly. Any amount of new knowledge tends to make the brain more critical. The only real exception is rote memorization without application.

