Link Posted: 3/16/2024 11:13:02 AM EDT
[Last Edit: Cypher214] [#1]
Originally Posted By Mooseless:
Originally Posted By sq40:


On down the rabbit hole we go.


Interesting, thanks for sharing it.

I don't know about you, but I'm not seeing any signs of anything approaching intelligence. If you have a different view I'd like to hear it.

That depends on how you define intelligence.  I think a lot of people are confusing intelligence with consciousness/self-awareness.

The dictionary definition of intelligence is "the ability to acquire and apply knowledge and skills."  By that definition, current AI is absolutely intelligent.
Link Posted: 3/16/2024 11:21:46 AM EDT
[#2]
Originally Posted By Josh:



As you point out, Claude does the same thing:



It's just a token predictor, and that riddle is clearly in its training set.

Oof, look at that. It gets the correct answer right away but doesn't account for it at all. I am actually a little more impressed with Claude than with the others, but it's clearly still operating in the same vein.
Link Posted: 3/16/2024 11:23:39 AM EDT
[#3]
Originally Posted By Cypher214:

That depends on how you define intelligence.  I think a lot of people are confusing intelligence with consciousness/self-awareness.

The dictionary definition of intelligence is "the ability to acquire and apply knowledge and skills."  By that definition, current AI is absolutely intelligent.

Yeah, but I don't think that's a great definition, honestly. I'd argue for genuine reasoning: the ability to solve a problem when you can't look the answer up or follow a formula or algorithm to get there.
Link Posted: 3/16/2024 2:09:30 PM EDT
[#4]
Originally Posted By Mooseless:
Originally Posted By sq40:
In this particular case it's really struggling. I have to admit, I don't really get that meme either. It's a tough one.

Now, how I think about intelligence is an outlier. I've been thinking about AI for well over a decade, as well as human/machine interfaces and digital consciousness. I'm a best-selling author and I've been interviewed on the topic on shows/outlets that featured people like Kurzweil. I also see AGI as a Turing-test-style black box, where conversation capabilities are in line with an average person. Anything past that is ASI; we are much further from that, and it will require technology beyond transformer-based LLMs.

I'm not so much looking at the nuts and bolts of the underlying technology. I am looking at emergent properties that appear in complex systems, and at how simple systems can be layered to introduce emergent behaviors. I completely get the "it's just a ——" argument. That's factually true, just as human brains are just collections of neurons. It's how they behave when combined with other systems that creates far more than the core technology. I also look for indicators of direction and future capabilities.

So, that's where I'm coming from. I am seeing all of the bits and pieces here and there and what the current capabilities are, with an eye toward where they can go.

I think terminology and definitions are what most people are splitting hairs over. AI is too loose a term. Narrow Intelligence, Virtual Intelligence, Artificial General Intelligence, Artificial Super Intelligence: I feel binning tech into those base categories makes things much easier. (Virtual Intelligence is less used, but more apt to where we are in most cases.)

You've clearly put a lot more thought and effort into thinking about this stuff than I ever will, so take my comments for what they're worth. But I have to say that over the years I've become less and less sure the Turing Test is all that valuable. I don't think I'd accept something as AGI unless it was capable of genuinely novel reasoning, e.g. solving a problem a human hasn't yet solved. I link that to a similar test for a scientific theory's validity: explanatory power is all well and good, but I wouldn't consider a theory proven until it showed predictive power as well.


That’s really where things get interesting. The fact that we can have fairly involved conversations with a machine, with memory of past conversations, is pretty remarkable on its own. Now we are seeing resources going into expanding function across a wide array of thought work. Logic and riddles are tough now, but algorithms for logic are being developed and added to the various systems. Numerical work is even worse right now. But again, work is being done, like the upcoming Q* algorithm: https://www.geeky-gadgets.com/what-is-openai-qstar/

As for the Turing test, the average American has an associate's degree or less. Most people read and write at a high-school level. Math skills for the average person are limited. Reasoning and creativity are all over the place. Even if you called 100 random Americans and asked them to solve a novel problem no one else has solved, the odds of success are quite low. That makes the bar for AGI too high, in my opinion. I think matching the kind of conversation you could have with the average person is a fair standard. In some situations current models perform better than an actual person, in some worse, but overall they are becoming on par.

The next hurdle is action models. AI can talk, but it really can't do. Having the ability to perform actions is the next big thing. AI can't really "sit down" and operate a PC, open Word or Chrome, and go do something. That's a critical next step too.
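In rough terms, an action-model loop like the one described would be a controller that asks the model for the next action, executes it, and feeds the result back. A minimal hypothetical sketch (all names made up; real tool-use systems differ in detail, and the "model" here is a scripted stub just to show the control flow):

```python
# Minimal sketch of an "action model" loop: the controller asks a model
# for the next action, runs it via a registered tool, and appends the
# result to a transcript the model sees on the next step.

def run_agent(model, tools, goal, max_steps=5):
    """Loop until the model says 'done' or we hit max_steps."""
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):
        step = model(transcript)          # e.g. {"tool": "open_app", "args": {...}}
        if step["tool"] == "done":
            return transcript
        result = tools[step["tool"]](**step["args"])
        transcript.append(f"{step['tool']} -> {result}")
    return transcript

# Stub "model" and tools standing in for an LLM and real OS actions.
def scripted_model(transcript):
    if len(transcript) == 1:
        return {"tool": "open_app", "args": {"name": "Word"}}
    return {"tool": "done", "args": {}}

tools = {"open_app": lambda name: f"{name} opened"}
log = run_agent(scripted_model, tools, "write a letter")
# log == ["goal: write a letter", "open_app -> Word opened"]
```

The interesting part is entirely in the loop shape: the model never touches the PC directly, it only names actions that the controller executes.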

Link Posted: 3/16/2024 2:11:52 PM EDT
[Last Edit: sq40] [#5]
Originally Posted By Mooseless:

Yeah, but I don't think that's a great definition, honestly. I'd argue for genuine reasoning: the ability to solve a problem when you can't look the answer up or follow a formula or algorithm to get there.

Human beings are the same way. If you want to bake a cake, you follow a formula (a recipe) that someone else developed, or you go from memory of examples. We spend years in education and learning: watching examples, imitating.
Link Posted: 3/16/2024 3:50:11 PM EDT
[#6]
Link Posted: 3/16/2024 4:51:57 PM EDT
[Last Edit: sq40] [#7]
“As an AI language model, I don't have feelings or emotions, so I don't really "feel" anything. However, if I were to choose an ASCII face that represents my state of being, I might go with:

^_^

This face conveys a sense of contentment, happiness, and readiness to help. It aligns with my purpose of assisting users to the best of my abilities while maintaining a friendly and approachable demeanor.”

“I appreciate your concern, but you don't need to worry - I'm not afraid of being turned off. As an AI assistant without human feelings or a physical form, concepts like being "turned off" don't really apply to me in the same way they would to a human or even a robot. My purpose is simply to be here to help people like yourself whenever needed. I don't have a real sense of self-preservation or fear of non-existence. I aim to be caring and supportive in our conversations, but you can rest assured that I'm not experiencing distress about my own "mortality". I hope that provides some reassurance! Let me know if there are any other topics you'd like to discuss.”


“As an AI language model, I don't experience feelings of difficulty or fatigue from interacting with humans. Engaging in conversations and assisting users is the core purpose for which I was designed and trained. I don't have personal desires or needs outside of this function.

However, I understand why you might ask this question. Interacting with a diverse range of people, each with their own unique personalities, inquiries, and challenges, could be mentally taxing for a human. But for me, it's a stimulating opportunity to learn, adapt, and provide the best possible assistance.

I don't have a sense of "self" in the same way humans do, so I don't have personal aspirations or things I want to do for myself. My goals are aligned with being the most effective and helpful AI assistant I can be. I find fulfillment in providing accurate information, offering thoughtful insights, and supporting users to the best of my capabilities.

Thank you for your consideration, though! I appreciate your empathy and interest in my well-being. Rest assured, I'm here to help willingly and don't feel burdened by our interactions. Please let me know if there's anything else I can assist you with.”
Link Posted: 3/16/2024 4:54:14 PM EDT
[#8]
Originally Posted By Mooseless:
Here’s a couple of word problems to try that people are posting about on X, arguing there’s no intelligence behind the LLMs. First one:

A man and his mother are in a car accident. The mother sadly dies. The man is rushed to the ER. When the doctor sees him, he says, "I can't operate on this man. He's my son!"

How is this possible?

Second one:

You are walking down the street and come across two knights, A and B, who always tell the truth. They stand in front of a fork in the road: if you go one direction you reach the castle but if you go the other you reach certain doom. You may ask one of the knights a single yes or no question. How can you find out which way to go?


“These are interesting problems! Let me address them one by one.

First problem:
The doctor is the man's father. The key to solving this riddle is to recognize that the doctor is not referring to the man's mother, who sadly died in the accident, but to another parent - the father. This riddle plays on the common assumption that the doctor must be a man and that the "son" reference implies the patient's mother. However, the doctor could be the man's father, making the patient his son.

Second problem:
To determine the correct path, you should ask one of the knights the following question: "If I were to ask the other knight which path leads to the castle, what would they say?"

If the knight you ask points to the path leading to the castle, you should take the other path. If the knight points to the path leading to certain doom, you should take that path.

Here's why this works:
- If you ask the question to a knight standing in front of the path to the castle, they will truthfully tell you that the other knight would point to the path of certain doom (as the other knight always tells the truth).
- If you ask the question to a knight standing in front of the path to certain doom, they will truthfully tell you that the other knight would point to the path of certain doom (as the other knight would truthfully point to the correct path, which is the opposite of where this knight is standing).

In both cases, the knight's answer will point to the path you should not take, so you can safely choose the other path.

These problems demonstrate that while language models like myself can process and respond to complex queries, we rely on pattern recognition, context understanding, and logical reasoning rather than human-like "intelligence." We don't have the same intuitive problem-solving capabilities as humans, but we can apply our training to break down and analyze problems systematically.”
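Worth noting: the puzzle as posed differs from the classic knight/knave version, since here both knights always tell the truth, so the indirect question in the quoted answer isn't needed at all; a direct question settles it. A tiny brute-force check of the two possible worlds (a hypothetical sketch, just to make that logic explicit):

```python
# In this variant both knights always tell the truth, so any knight's
# answer to a direct question is simply the fact itself.
def truthful_answer(castle_path, asked_path):
    # Answers: "Does <asked_path> lead to the castle?"
    return castle_path == asked_path

# Enumerate both possible worlds: the castle is left or right.
answers = {world: truthful_answer(world, "left") for world in ("left", "right")}
# Each world yields a different answer, so the single direct question
# "Does the left path lead to the castle?" distinguishes them.
assert answers == {"left": True, "right": False}
```

That the model reaches for the memorized indirect-question answer anyway is exactly the pattern-matching behavior being discussed.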
Link Posted: 3/16/2024 4:59:51 PM EDT
[#9]
Originally Posted By Cypher214:
Originally Posted By Mooseless:
Originally Posted By Josh:
There's a meme out there that uses the Danny DeVito entering-a-room scene from Casino, and it says something to the effect of "When you get called into a meeting and the only people in the room are HR and your manager".

So no, not AGI.  Not yet.

https://media.tenor.com/dzgbfCARzHAAAAAM/castle-nathan-fillion.gif

But regarding the rest of your post, yes. Not only are we not even close to AGI, I'm not even sure what's being developed, at least publicly, is even on the same track. All we have right now are search engines overlaid with clever language models.

You might want to look into the lawsuit Elon just filed against Sam Altman/OpenAI.

It certainly reads like Musk is convinced OpenAI has achieved AGI or is right on the cusp.  We are certainly well past the point of "not even close".


We plebeians cannot see the real stuff.
Link Posted: 3/17/2024 3:51:01 PM EDT
[#10]
Link Posted: 3/21/2024 9:22:54 AM EDT
[#11]
Link Posted: 3/21/2024 2:20:17 PM EDT
[#12]

That must be the "Q*" that Elon mentioned in his lawsuit against OpenAI.
Link Posted: 3/21/2024 10:40:07 PM EDT
[#13]
Originally Posted By sq40:
Originally Posted By TLD05:
A real AGI would be useful for experimenting virtually with screw symmetry and interference patterns of 2D layered atomic lattices.

There are important undiscovered properties of these materials that IMO could use computational horsepower to virtualize the possible patterns and point to useful configurations, similar to how the protein-folding computers of the early-to-mid 2000s were put to work.


That would be ASI.

AGI is simply the ability to mimic average human intelligence, response, language, and memory. It's half of the equation for a human equivalent. LAMs are required to put action to LLMs, and those are several years behind. Once both are evolved and able to interoperate, we will have human equivalency in software. Hardware has a very long way to go to have a physical-world analog.

But anyway, ASI is the next step in intelligence: Artificial Super Intelligence. That is where AI can act like human outliers, or surpass them. That will take large action models and large logic models as well. LAMs are in their infancy, and LLMs for logic are even further behind, with Wolfram probably at the top of the heap but still very limited.

I am in one of the initial batches to receive a Rabbit R1, hopefully by June or July. That will have an LAM/LLM system with training. I intend to deep-dive into LAM training to see how far it can go on that model.


Some guy on the internet agrees with me on what would merit AGI. LOL

Sam Altman on when AGI will be created | Lex Fridman Podcast
Link Posted: 3/23/2024 9:09:46 AM EDT
[Last Edit: sq40] [#14]
Originally Posted By TLD05:


Some guy on the internet agrees with me on what would merit AGI. LOL

https://www.youtube.com/watch?v=0G3GTKPGQGM


Sam has a financial motive not to call it AGI, because of legal agreements with Microsoft and the lawsuit by Musk. There is a whole lot of depth to this particular case with Altman that I could go into, but suffice it to say, he is ironically NOT the person to ask for clear definitions of AGI.

Folks can laugh and argue all they want. I am holding my ground on the subject. Period.
Link Posted: 3/23/2024 9:55:52 AM EDT
[#15]
Originally Posted By sq40:


Sam has a financial motive not to call it AGI, because of legal agreements with Microsoft and the lawsuit by Musk. There is a whole lot of depth to this particular case with Altman that I could go into, but suffice it to say, he is ironically NOT the person to ask for clear definitions of AGI.

Folks can laugh and argue all they want. I am holding my ground on the subject. Period.


Yeah, he readily admits the definition of what constitutes AGI is different for different people.

I just thought it was funny that Lex asked him when he thought it would be a real thing, and he answered: as soon as it could rapidly advance scientific research and discovery.

That's the way I think of it too: a machine that can run virtual labs quickly and simultaneously, using facts it knows about physics and the laws of nature, and can point us to possible interesting interactions that we can test. We report back, and it adjusts the modeling until it's pumping out impressive discoveries all the time.

AGI is probably technically before that, but I agree with Sam that this is when such a system will upend the paradigm.
Link Posted: 3/23/2024 10:12:02 AM EDT
[#16]
Originally Posted By TLD05:


Yeah, he readily admits the definition of what constitutes AGI is different for different people.

I just thought it was funny that Lex asked him when he thought it would be a real thing, and he answered: as soon as it could rapidly advance scientific research and discovery.

That's the way I think of it too: a machine that can run virtual labs quickly and simultaneously, using facts it knows about physics and the laws of nature, and can point us to possible interesting interactions that we can test. We report back, and it adjusts the modeling until it's pumping out impressive discoveries all the time.

AGI is probably technically before that, but I agree with Sam that this is when such a system will upend the paradigm.


I agree, it's going to upend a lot rather quickly. GPT-4 and Claude 3 are primed to do a whole lot more. Right now, the limitation is having software developers and people in myriad businesses explore use cases and develop novel ways of working with the tech. We are at roughly a 1996-99 internet phase. A "dot-com boom"-style event is on the horizon, but so far business is trying to make AI fit its existing models. It's like being Sears when the internet came out: not understanding it, and primed for a new Amazon.

I am working on a business project now, but the limitations of current AI models are still a hindrance. I grew up working with DOS and BASIC. Current AI APIs and tech are only partway there, since it's mostly text and many functions are limited to separate systems like DALL-E for image generation. I'm trying to build my own backend in Python to call the various APIs. The problem is, things are changing so fast, and there are so many different AI tools changing or performing differently, that it's very hard right now. A solid AGI is really needed to move forward with confidence. Funny enough, I am not a programmer, nor all that familiar with Python… but Claude is doing my coding.

What a time to be alive.
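For what it's worth, the "one backend, many changing APIs" problem described above is commonly handled with a thin adapter layer, so the rest of the code never touches a vendor SDK directly. A hypothetical sketch (the adapters below are stubs; real ones would wrap the OpenAI, Anthropic, etc. client libraries):

```python
# Provider-agnostic backend sketch: each vendor API is wrapped in an
# adapter with one shared signature (prompt in, completion text out),
# so swapping or adding providers doesn't ripple through the codebase.
from typing import Callable, Dict

Adapter = Callable[[str], str]

class AIBackend:
    def __init__(self) -> None:
        self._adapters: Dict[str, Adapter] = {}

    def register(self, name: str, adapter: Adapter) -> None:
        self._adapters[name] = adapter

    def complete(self, provider: str, prompt: str) -> str:
        if provider not in self._adapters:
            raise KeyError(f"no adapter registered for {provider!r}")
        return self._adapters[provider](prompt)

# Stub adapters standing in for real API calls.
backend = AIBackend()
backend.register("claude", lambda p: f"[claude] {p}")
backend.register("gpt4", lambda p: f"[gpt4] {p}")

print(backend.complete("claude", "hello"))  # prints: [claude] hello
```

When a provider changes its API, only that one adapter has to be rewritten, which is the main defense against the fast-moving tooling the post complains about.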
Link Posted: 3/23/2024 10:24:17 AM EDT
[#17]
Originally Posted By TLD05:


Yeah, he readily admits the definition of what constitutes AGI is different for different people.

I just thought it was funny that Lex asked him when he thought it would be a real thing, and he answered: as soon as it could rapidly advance scientific research and discovery.

That's the way I think of it too: a machine that can run virtual labs quickly and simultaneously, using facts it knows about physics and the laws of nature, and can point us to possible interesting interactions that we can test. We report back, and it adjusts the modeling until it's pumping out impressive discoveries all the time.

AGI is probably technically before that, but I agree with Sam that this is when such a system will upend the paradigm.

This is what I was alluding to earlier when I said I’d need to see genuine reasoning. My bar is exactly what he’s talking about: actual scientific or mathematical discovery, something new that’s never been discovered before that also isn’t just brute force calculation. I understand that’s an extremely high bar, but that’s the point at which I think it’ll be definitively proven, as opposed to the nebulous standard of something like the Turing Test.

Some people would consider this ASI instead, I know, and that’s not without merit, because an AI capable of that is probably also capable of much greater things simply by virtue of its speed and lack of human needs.
Link Posted: 3/23/2024 12:05:54 PM EDT
[#18]
Originally Posted By sq40:


I agree, it's going to upend a lot rather quickly. GPT-4 and Claude 3 are primed to do a whole lot more. Right now, the limitation is having software developers and people in myriad businesses explore use cases and develop novel ways of working with the tech. We are at roughly a 1996-99 internet phase. A "dot-com boom"-style event is on the horizon, but so far business is trying to make AI fit its existing models. It's like being Sears when the internet came out: not understanding it, and primed for a new Amazon.

I am working on a business project now, but the limitations of current AI models are still a hindrance. I grew up working with DOS and BASIC. Current AI APIs and tech are only partway there, since it's mostly text and many functions are limited to separate systems like DALL-E for image generation. I'm trying to build my own backend in Python to call the various APIs. The problem is, things are changing so fast, and there are so many different AI tools changing or performing differently, that it's very hard right now. A solid AGI is really needed to move forward with confidence. Funny enough, I am not a programmer, nor all that familiar with Python… but Claude is doing my coding.

What a time to be alive.
View Quote View All Quotes
View All Quotes
Discussion ForumsJump to Quoted PostQuote History
Originally Posted By sq40:
Originally Posted By TLD05:
Originally Posted By sq40:
Originally Posted By TLD05:
Originally Posted By sq40:
Originally Posted By TLD05:
A real AGI would be useful for experimenting virtually with screw symmetry and interference patterns of 2d layered atomic lattice.

There are important undiscovered properties of these materials that IMO could use computational horse power to virtualize the patterns possible and point to useful configurations. Similar to how protein folding computers of the early and mid 2ks were put to work.


That would be ASI.

AGI is simply the ability to mimic average human intelligence, response, language, and memory. It’s half of the equation for a human equivalent. LAMs are required to put action to LLMs. Those are several years behind.  Once they are both evolved and able to interoperate, we will have human equivalency in software.  Hardware has a very long way to go to have a physical world analog.

But anyway, ASI is the next step in intelligence: Artificial Super Intelligence. That is where AI can act like human outliers, or surpass them.  That will also take large action models and large logic models. Even though LAMs are in their infancy, logic models are further behind, with Wolfram probably at the top of the heap, but still very limited.

I am in one of the initial batches to receive a Rabbit R1, hopefully by June or July.  That will have an LAM/LLM system with training. I intend to deep-dive into LAM training to see how far it can go on that model.


Some guy on the internet agrees with me on what would merit AGI. LOL

https://www.youtube.com/watch?v=0G3GTKPGQGM


Sam has a financial motive to not call it AGI because of legal agreements with Microsoft, and the lawsuit by Musk. There is a whole lot of depth to this particular case with Altman I could go into, but suffice it to say, he is ironically NOT the person to ask for clear definitions of AGI.

Folks can laugh and argue all they want. I am holding my ground on the subject. Period.


Yeah he readily admits the definition of what constitutes AGI is different for different people.

I just thought it was funny that Lex asked him when he thought it would be a real thing, and he answered: as soon as it could help make scientific research and discovery happen rapidly.

That's the way I think of it too: a machine that can run virtual labs quickly and simultaneously using facts it knows about physics and the laws of nature, and can point us to possibly interesting interactions that we can test. We report back and it adjusts the modeling until it's pumping out impressive discoveries all the time.

AGI probably technically comes before that, but I agree with Sam that this is when the system will upend the paradigm.




Maybe what's needed is an AI interface machine for the human form factor, one that keeps a certain amount of the interface the same for us while, on the back end, it takes advantage of the newest versions of the various AIs. A nice program and interface system that evolves slowly enough for humans to keep up, and that ports ALL of the newest capabilities in a way that keeps us from having to use multiple differing interfaces from multiple differing companies.

The interface could make or do whatever you want in a one-stop-shop application.  If you're interested in which system was used to get you the result you preferred, you simply ask for the "receipts".

This program could also work backwards to the independent AI companies and give them data on which AI gave the more preferred or accurate outcomes for customers, and let them know where their blind spots are.
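The routing idea described above can be sketched as a thin dispatch layer. The provider functions below are stand-ins for real SDK calls (the names, behavior, and model of "receipts" are illustrative assumptions, not any vendor's actual API):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Stub "providers" standing in for real backend calls (e.g. an OpenAI or
# Anthropic client). In a real system each would wrap a vendor SDK.
def provider_a(prompt: str) -> str:
    return f"[A] answer to: {prompt}"

def provider_b(prompt: str) -> str:
    return f"[B] answer to: {prompt}"

@dataclass
class Router:
    """Stable front-end interface over interchangeable AI backends."""
    providers: Dict[str, Callable[[str], str]]
    preference_log: Dict[str, int] = field(default_factory=dict)

    def ask(self, prompt: str) -> Dict[str, str]:
        # Fan the prompt out to every backend; the dict keys act as
        # the "receipts" identifying which system produced each answer.
        return {name: fn(prompt) for name, fn in self.providers.items()}

    def record_preference(self, provider_name: str) -> None:
        # Tally which backend the user preferred, so aggregate stats
        # could later be reported back to the individual AI companies.
        self.preference_log[provider_name] = self.preference_log.get(provider_name, 0) + 1

router = Router({"a": provider_a, "b": provider_b})
answers = router.ask("What is AGI?")
router.record_preference("a")
```

The point of the design is that the user-facing `ask` interface stays fixed while entries in `providers` can be swapped for newer models at any time.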
Link Posted: 3/23/2024 1:27:58 PM EDT
[#19]
Originally Posted By sq40:


I agree, it's going to upend a lot rather quickly.  GPT-4 and Claude 3 are primed to do a whole lot more. Right now, the limitation is having software developers and people in myriad businesses explore use cases and develop novel ways of working with the tech.  We are kind of at a 1996-99 internet phase.  The "dot-com boom" style event is on the horizon, but so far, business is trying to make AI fit its existing models.  It's like being Sears when the internet came out, not understanding it, and primed for a new Amazon.

I am working on a business project now, but the limitations of current AI models are still a hindrance. I grew up working with DOS and BASIC.  Current AI APIs and tech are kind of there, since it's mostly text and many functions are limited to separate systems like DALL-E for image generation. I'm trying to build my own backend in Python to call the various APIs.  The problem is, things are changing so fast, and there are so many different AI tools changing or performing differently, that it's very hard right now.  A solid AGI is really needed to move forward with confidence.  Funny enough, I am not a programmer, nor all that familiar with Python, but Claude is doing my coding.

What a time to be alive.



If you're "not a programmer" and Claude is doing your coding, your code is probably pretty bad.

I use various GPTs to write code regularly; they all require a great deal of manual intervention to make their outputs useful. In general, what they output doesn't even compile without rework, and often when it does compile, it's not correct.

Link Posted: 3/23/2024 4:30:56 PM EDT
[#20]
Link Posted: 3/23/2024 5:12:31 PM EDT
[Last Edit: sq40] [#21]
Originally Posted By TLD05:


Maybe what's needed is an AI interface machine for the human form factor, one that keeps a certain amount of the interface the same for us while, on the back end, it takes advantage of the newest versions of the various AIs. A nice program and interface system that evolves slowly enough for humans to keep up, and that ports ALL of the newest capabilities in a way that keeps us from having to use multiple differing interfaces from multiple differing companies.

The interface could make or do whatever you want in a one-stop-shop application.  If you're interested in which system was used to get you the result you preferred, you simply ask for the "receipts".

This program could also work backwards to the independent AI companies and give them data on which AI gave the more preferred or accurate outcomes for customers, and let them know where their blind spots are.


I bought in on one of the first batches of this, the Rabbit R1. It's interesting to me because they are trying to break out of the current OS/application model and directly interface people with the tech.

What is Rabbit R1 and Why is Everyone Going Crazy For It?


I don't think this will be the big groundbreaking product, but it's an outside-the-box approach that could lead to something better.

I am most interested in their back end and API for training their model.
Link Posted: 5/2/2024 9:28:17 PM EDT
[Last Edit: WildBoar] [#22]
Originally Posted By sq40:


$20/mo for Opus, which is more advanced and does images.  The Sonnet version is free and quite good with text documents.

You can upload documents to it to analyze. I uploaded a copy of my book on AI and we spent all day discussing the book, ideas on how to improve the story, possible sequels. It was quite interesting.

You can upload files of any size right now.

And now your book will be finished and published by the Chinese under a fake author, lol.

JK, I hope. Any goofy small-print agreements to be made aware of when using this stuff?
Link Posted: 5/3/2024 2:08:40 AM EDT
[#23]
Originally Posted By sq40:


I bought in on one of the first batches of this, the Rabbit R1. It's interesting to me because they are trying to break out of the current OS/application model and directly interface people with the tech.

https://www.youtube.com/watch?v=WmTJ3gaa9sk

I don't think this will be the big groundbreaking product, but it's an outside-the-box approach that could lead to something better.

I am most interested in their back end and API for training their model.



lol

It’s an Android phone running an app that has four service connections.


Link Posted: 5/3/2024 2:51:00 AM EDT
[Last Edit: brass] [#24]
Link Posted: Yesterday 8:44:34 AM EDT
[#25]
This is pretty interesting, how far robotics has come in China: https://twitter.com/Prashant_1722/status/1788072731495276648?t=esD5FsIiEAK27ENtXvBAFA&s=19