Hmm, so the AI-bubble theme has now reached the mainstream.
It seems to me that the only thing we're waiting for now is some serious earnings shortfalls in the AI sector, and then the house of cards can finally start to fall.
@h4890>earning shortfall Hardware manufacturers can pay software manufacturers and vice versa, thus making infinite money, by doing it. And they likely aiming to be the censorship tool for zog, so they will do alright for a long while.
Learn to damage transformers. If you can get the oil to leak out of them, whatever capacity of behind them is permanently offline until they get repaired which could take weeks if not months. @amerika@Humpleupagus@h4890
It would never happen but ive heard the back stock on them is also pretty low so you get a significant number they would be fucked because they dont exist.
I had the windows and roof replaced, and whole house re-stucco'd, 10 years ago. Washer and dryer are a year old. New HVAC was installed this year. PG&E was still $850/mo during summer.
Additionally, they never knew AI was going to exist, so why would they have built out the power? It's like burning off excess kerosene in the bay because it's useless, and then Diesel building an engine that runs on it.
@Humpleupagus@Dudebro@amerika@h4890@pepsi_man@sapphire Idk if it even matters what anyone does to their home to make it more energy efficient. The power companies all raised their rates. Ours went from around $450 to well over $600 this summer.
It gets better because we have no skilled labor force sizable enough to build and maintain this shit. Boiler operators don't fucking exist anymore; they were all boomers who were in the Navy in Nam and are retiring this decade. Teachers told you to get a degree so you don't become blue collar, and boilers are "archaic" tech that nobody ever got into.
@Dudebro@pepsi_man@amerika@Humpleupagus@h4890@sapphire And welders, pipe-fitters, steel workers, machinists, electricians and linemen... the list goes on. For a young person right now, any of these types of fields could turn out pretty well.
@Dudebro@amerika@Humpleupagus@h4890@pepsi_man@sapphire It looks like the Navy even phased out (BT) Boiler Tech and rolled them into (MM) Machinist Mate. There might be some potential from that group if you could scrape for the competent young enlisted guys who are sick of the Nav and ready to bail!
@Dudebro@amerika@Humpleupagus@h4890@pepsi_man@sapphire Yeah, sporty times coming up. Get out of cities now! Within two weeks of a complete blackout, every city will be recognizable from miles away by the black smoke plumes marking the grave site.
I'm rural and expect a different problem set. "Neighborhood Watch" will take on a whole new meaning as we shift into Rhodesia/South Africa mode.
The suburbs will be really intense battle zones. Get with retired combat arms neighbors now and do an area study! Preplan roadblocks and fighting positions. If you want something useful from the HOA, consider how to justify the expense of serious perimeter fencing!
@pepsi_man@Dudebro@Humpleupagus@amerika@h4890Oh! And read the substation distribution maps first so you only blackout the ghettoes and the jew neighborhoods. It's just harmless 'plinking'.
@amerika As always with bubbles. But the positive is that, like with most bubbles, some companies will survive. I am 100% confident that we'll get some positives out of the current LLM monkey AIs: at worst, at least a very good and widely accepted idea of where they don't work. ;) At best, some niche areas where they do work.
I think that true artificial _general_ intelligence is probably just 1 or 2 AI bubbles away, not counting the current one.
@amerika What I am philosophizing a lot about is whether this future general intelligence will hit a kind of "limit" or fundamental constant of intelligence, meaning it will never be more intelligent than us, or whether, as the transhumanists believe, there is no limit.
I think it would be enormously fascinating if we actually _did_ hit some kind of constant of how intelligent a single unit can be. Then the question becomes... why?
I do not know about limits, but I think there will be a point of maximum efficiency and intelligence will be limited there by practicality. My guess is that the sweet spot is lower than people think. Humans with 115 IQ points generally seem happy.
I thought a lot about this. Let's say hypothetically that before the sun goes red giant, we get out to Europa, and we also find a way to keep it from evaporating away during that phase. So now humanity is stuck under the ice of Europa. There's very little energy, because the primary source is tidal heating from the orbit around Jupiter, and there's really very little danger: there are no predators, no weather, no geological events.
Intelligence would be maladaptive at some point. It would make way more sense for any given life form to dramatically reduce the amount of energy it burns thinking of things because there's probably only about four things you need to do, and if you don't do those four things you're probably going to die, and there's nothing you can do that's actually going to be more positive than doing those four things.
Now the point here isn't to look at this deep dark future under the ice of Europa as a direct prediction. The point is to look at the utility of intelligence and find out where the limits are. As the amount of energy available goes down, the number of options available goes down, and the likelihood that an intelligent decision will be materially better than an unintelligent one goes down, the utility of intelligence goes down.
Therefore there would have to be an upper limit to intelligence because there is a point where additional intelligence doesn't have the additional evolutionary utility to justify it.
I have a hilarious meme about the bell curve and throwing money at niggers for them to only get dumber to the surprise of whites. But I can only give you a brief description to make you lol.
@amerika Yes. Full automation, intellectual sparring partners. I think ending up in a situation where we create matrix level overlords is extremely unlikely. AI won't be created in a day, and I assume (!!) we are smart enough as a species not to invent our own doom, in the long term.
@amerika True. If there is a limit, it would be tightly coupled with the very nature of intelligence itself. What a fascinating thing if it turned out that it is such a delicate tool and phenomenon, that past a certain point it collapses into madness or full meditative self-absorption from which it won't wake up.
@sj_zero@amerika Sounds very plausible. I recently read The Martyrdom of Man by Winwood Reade (from around 1870), and he was a big fan of Darwinism, of how survival is what drives us, and of how adversity has been responsible for a lot of our development.
Looking at intelligence as a response to environmental pressure is a nice take on it!
Maybe that will be the limit? Our global IT corps might be able to make their AIs smarter, _but_ the cost of doing so, vs. what it will produce, will make it prohibitively expensive at some point to improve them further.
I would also be very interested in how high IQ correlates with procreation. If I had to guess, I'd say that the more intelligent you are, the less likely you are to have many children, except for Elon Musk, perhaps. ;)
The Pearson correlations between national IQ scores and the three national fertility indicators were as follows: Total Fertility Rate (r = −0.71, p < 0.01), Birth Rate (r = −0.75, p < 0.01), and Population Growth Rate (r = −0.52, p < 0.01).
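For anyone wanting to sanity-check what a negative r like that means, here's a minimal Pearson correlation sketch. The numbers below are made up for illustration, not the actual national IQ/fertility dataset those figures come from:

```python
# Pearson correlation computed by hand (no external libraries needed).
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n          # mean of x
    my = sum(ys) / n          # mean of y
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical toy data: as "IQ" rises, "fertility" falls,
# so r comes out strongly negative (somewhere between -1 and 0).
iq = [85, 90, 95, 100, 105, 110]
fertility = [5.1, 4.2, 3.0, 2.1, 1.8, 1.4]
print(pearson_r(iq, fertility))
```

An r of −1 would mean a perfectly straight downward-sloping line; −0.71 means a strong but noisy downward trend.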
One of Musk kids is a troon. Many others will follow?
@amerika@sj_zero Oh, and I read chapter 4. I don't think the first 3 are so interesting for anyone who has a basic understanding of science and history.
I also found it interesting that he shows very early transhumanist leanings! Earlier than what is recorded on Wikipedia.