I wouldn't trust 4chan as a viable source, as it is well seeded with actual disinformation (not just what the State wants to call such). I have seen two theories I think may have some merit.
1) That Microsoft has sunk billions into OpenAI, plus its own AI efforts, and saw a way to grab up a lot of OpenAI talent. The investment amount is real, and I have seen Microsoft pull not-dissimilar shenanigans over the last 30+ years.
2) That the board freaked out because Altman was apparently working on penning deals with the UAE without formally informing it. That would certainly be grounds for dismissal in most corporations.
I doubt very much they have cracked AGI. As impressive as LLMs and reinforcement learning are, I don't think a marriage of the two gives you AGI. I don't believe the nut of actual understanding of concepts, much less of inventing and perfecting new ones, has been cracked. Though some may ask whether much of what humans do with our "big brains" is all that different from what today's AIs are doing. How many humans use formal operations a la Piaget? How many humans understand how many things at their conceptual roots?
Hi Samantha,
I don't trust 4chan either, but you never know. Both 1 and 2 seem like reasonable theories. I guess we will know one day, but perhaps not soon.
At first glance, LLMs do seem very different from us.
But is this really the case? I don’t remember how I learned to talk, but it seems plausible that, after hearing many people regularly say certain words in certain situations, I started to say the same words in those situations. In particular, I started to say certain words in reply to certain other words. Well, this is what LLMs do, more or less. So perhaps LLMs are not that different from us?
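To make the analogy concrete, here is a toy sketch of that "say what you've most often heard said in this situation" idea. This is not how real LLMs work internally (they use neural networks to predict tokens over huge corpora), and the data below is entirely made up for illustration, but the statistical flavor is similar:

```python
from collections import Counter, defaultdict

# Made-up "experience": pairs of (what was heard, what was said in reply),
# mimicking hearing many people regularly say certain words in certain situations.
exchanges = [
    ("hello", "hi"),
    ("hello", "hi"),
    ("hello", "hey"),
    ("how are you", "fine thanks"),
    ("how are you", "fine thanks"),
]

# Count which reply follows which prompt.
counts = defaultdict(Counter)
for heard, said in exchanges:
    counts[heard][said] += 1

def reply(prompt):
    """Say the reply most often heard after this prompt."""
    if prompt not in counts:
        return "..."  # never encountered this situation before
    return counts[prompt].most_common(1)[0][0]

print(reply("hello"))        # "hi" was heard twice, "hey" only once
print(reply("how are you"))
```

A real LLM generalizes to situations it has never seen verbatim, which this lookup table cannot do; whether that generalization amounts to "understanding" is exactly the question being debated above.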