Latest Notable AI accomplishments

  • Thread starter gleem
  • Start date
  • Tags
    AI
In summary, developments in AI and machine learning are progressing rapidly and are not limited to playing games; they also include pattern recognition and image analysis in fields such as medicine, astronomy, and business. One challenge for AI is effective natural language processing, but recent advancements have shown promising results. With so many people working on AI and its potential to significantly change our society, it is likely that we will see more groundbreaking developments in the next few years. However, it is important to note that people tend to overestimate the timeline for AI's progress. AI has the potential to completely change the way we live and work, similar to how electricity has become an inseparable part of our society. It is expected to replace various jobs.
  • #36
Another indirect contribution to AI implementation has been developed. Researchers in Zurich have found a way to store information in DNA molecules and embed them in nanobeads of ceramic material. Possible applications include huge information-storage densities and self-replicating machines.

https://physicsworld.com/a/embedded-dna-used-to-reproduce-3d-printed-rabbit/ Rabbit is not real.
 
  • #38
gleem said:
Rabbit is not real.
Thanks for the clarification! :oldbiggrin:
 
  • #39
Until just recently, a successful model for AI was the neural net, based on a network of interconnected neurons. This was because the neuron was identified as the leading information-processing component of the brain. The brain is composed mostly of two types of cells: neurons and glial cells. Glial cells (from the Greek for "glue"), of which there are several kinds, were originally believed to be support cells for the neurons, performing maintenance and protective functions. Fairly recently their function was deduced to also include communicating with the neurons, especially the astrocytes, which have many dendrite-like structures.

https://medicalxpress.com/news/2020-04-adult-astrocytes-key-memory.html

Developing a human-level neural-net system for even a dedicated task was challenging and hardware-limited. A human brain has roughly 86 billion neurons, each with possibly 10,000 or more synapses, and astrocytes are at least as numerous as neurons. You can see the problem with software models of a neural net on standard computer hardware. Even with neural-net processors built on 10-nanometer technology there are still challenges, for example the fact that brains are three-dimensional.
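The scale problem can be made concrete with quick arithmetic, using round figures of the same order as the estimates above (these are back-of-envelope numbers, not measurements):

```python
# Back-of-envelope scale of the brain vs. conventional hardware.
# Figures are rough, commonly cited estimates, not precise counts.
neurons = 86e9             # ~86 billion neurons
synapses_per_neuron = 1e4  # plausible average synapse count per neuron
total_synapses = neurons * synapses_per_neuron

# If each synaptic weight were stored as a 32-bit float (4 bytes):
bytes_needed = total_synapses * 4
petabytes = bytes_needed / 1e15
print(f"~{total_synapses:.1e} synapses, ~{petabytes:.2f} PB just for the weights")
```

Several petabytes just to hold the weights, before any computation, is one way to see why standard hardware struggles.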

https://techxplore.com/news/2020-07-astrocytes-behavior-robots-neuromorphic-chips.html

Now a group at Rutgers University has integrated some astrocyte functionality into a commercial neuromorphic chip from Intel to control the movement of a six-legged robot.
"As we continue to increase our understanding of how astrocytes work in brain networks, we find new ways to harness the computational power of these non-neuronal cells in our neuromorphic models of brain intelligence, and make our in-house robots behave more like humans," Michmizos said. "Our lab is one of the few groups in the world that has a Loihi spiking neuromorphic chip, Intel's research chip that processes data by using neurons, just like our brain, and this has worked as a great facilitator for us. We have fascinating years ahead of us."

One final note: they used the term "plastic" a few times; both in its standard definition and as applied to AI, plasticity refers to the ability to adapt.
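For anyone curious what "processes data by using neurons" means on a spiking chip like Loihi, here is a minimal leaky integrate-and-fire neuron. This is only an illustrative sketch of the spiking idea, not the actual neuron model Intel uses:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# spiking computation, NOT the actual model used on Intel's Loihi chip.
def simulate_lif(input_current, threshold=1.0, leak=0.9, v0=0.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i      # membrane potential leaks, then integrates input
        if v >= threshold:    # crossing the threshold emits a spike...
            spikes.append(t)
            v = 0.0           # ...and resets the potential
    return spikes

# A steady sub-threshold input still produces periodic spikes:
spike_times = simulate_lif([0.4] * 10)
print(spike_times)  # -> [2, 5, 8]
```

Information is carried in the timing of discrete spikes rather than in continuous activations, which is what makes such chips so power-efficient.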
 
  • Informative
Likes lomidrevo
  • #41

Drug-Discovery AI Designs 40,000 Potential Chemical Weapons in 6 Hours

In a recent study published in the journal Nature Machine Intelligence, a team from pharmaceutical company Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. It successfully identified 40,000 new potential chemical weapons in just 6 hours, with some remarkably similar to the most potent nerve agent ever created.
According to an interview with the Verge, the researchers were shocked by how remarkably easy it was.

“For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets,” said Fabio Urbina, lead author of the paper, to the Verge.

“So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.”
https://www.iflscience.com/drugdisc...0-potential-chemical-weapons-in-6-hours-63017
 
  • Informative
Likes Oldman too
  • #43
I have mentioned before that useful applications will be accelerated by hardware development. Recently a company developed the largest computer chip yet, at 462 cm². It eliminates the need to use hundreds of GPUs for the calculations, and the need to mind their intricate interconnections, which results in extensive (and expensive) time to program the system. This chip will help accelerate AI research; however, it still requires a huge amount of power, 20 kW.

Some researchers are giving AI access to other ways of interacting with the outside world. They are giving AI the ability to learn about itself, that is, to self-model.

Summary: Researchers have created a robot that is able to learn a model of its entire body from scratch, without any human assistance. In a new study, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
https://www.sciencedaily.com/releases/2022/07/220713143941.htm
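The self-modeling idea can be sketched in miniature: let a simulated two-link arm try random joint angles ("motor babbling"), record where its hand ends up, and reuse that experience as a crude self-model for reaching goals. This is an illustrative toy with made-up link lengths, not the method from the paper:

```python
import math, random

# Toy "self-modeling" sketch (illustrative only, not the study's method):
# a 2-link planar arm babbles random joint angles, records where its hand
# ends up, and later reuses that experience as a crude self-model.
L1, L2 = 1.0, 1.0  # link lengths (hypothetical)

def hand_position(a, b):
    """Ground-truth forward kinematics the robot does NOT know explicitly."""
    x = L1 * math.cos(a) + L2 * math.cos(a + b)
    y = L1 * math.sin(a) + L2 * math.sin(a + b)
    return x, y

# 1. Motor babbling: try random commands, record (command, outcome) pairs.
random.seed(0)
experience = []
for _ in range(2000):
    a = random.uniform(-math.pi, math.pi)
    b = random.uniform(-math.pi, math.pi)
    experience.append(((a, b), hand_position(a, b)))

# 2. Self-model as nearest-neighbour lookup: to reach a goal, replay the
#    remembered command whose recorded outcome was closest to the goal.
def plan(goal):
    return min(experience, key=lambda e: math.dist(e[1], goal))[0]

goal = (1.2, 0.8)
a, b = plan(goal)
error = math.dist(hand_position(a, b), goal)
print(f"reach error: {error:.3f}")
```

The real system learns a much richer model and handles damage, but the babble-then-model loop is the core idea.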
 
  • Like
Likes Oldman too
  • #44
While my previous post showed significant progress in reducing learning time using computers to model the human brain, MIT researchers have taken another tack: a system in which the learning takes place within a memory structure more in line with the architecture of a biological brain. Current computer-generated neural networks emulate the conductivity of a neuron's synapses by weighting each synapse through a computation. As I understand it, this involves using memory to store the weighting factor, as well as shuttling information between the memory and a CPU for the weighting calculation. The new approach uses what is known as a resistive memory, in which the memory develops a conductance internally, eliminating the CPU, the movement of data, and the associated power requirement. The process is thus really analog, not digital. The system uses a silicon-compatible inorganic substrate to build artificial neurons that are 1000 times smaller than biological neurons and promise to process information much faster, with a power requirement much closer to a biological system's. Additionally, the system is massively parallel, reducing learning time further.
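The in-memory principle is essentially Ohm's law plus Kirchhoff's current law: store each weight as a conductance G, apply the inputs as voltages V, and the current summing on each output line is already the weighted sum I_j = Σ_i V_i · G_ij. Here is a digital simulation of that analog operation (a sketch of the principle only, not MIT's device):

```python
# Simulated resistive crossbar: each weight is a conductance (siemens),
# each input is a voltage. The column currents ARE the matrix-vector
# product, computed by physics rather than by shuttling data to a CPU.
def crossbar_mvm(conductances, voltages):
    """I_j = sum_i V_i * G[i][j]  (Ohm's law + Kirchhoff's current law)."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

G = [[1e-6, 2e-6],   # conductances: one row per input line
     [3e-6, 4e-6]]
V = [0.5, 1.0]       # input voltages
currents = crossbar_mvm(G, V)
print(currents)      # column currents = the weighted sums
```

In the physical device this multiply-accumulate happens in one step for the whole array, which is where the speed and power advantages come from.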

Earlier work published last year demonstrated the feasibility of a resistive memory. One year later, significant progress has been made, so that a working resistive NN can be built with silicon fabrication techniques.

Bottom line: smaller size, lower power, faster learning. Most predictions put artificial general intelligence at 2050 at the earliest, probably closer to 2100, if it happens at all. It is beginning to look like it might come earlier.

MIT's Quest for Intelligence Mission Statements
https://quest.mit.edu/research/missions
 
  • Informative
  • Like
Likes anorlunda and Oldman too
  • #45
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language usage is one of the most difficult tasks for AI, considering the large number of exceptions, nuances, and contexts; this makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest is a metric: the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit the translation of another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate, then machine translations will be as good as humans' in 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec
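As a sanity check on that extrapolation, here is a straight-line trend through the two quoted data points. Taking "today" as 2022 is an assumption, and the article's "7 years" may come from a different (e.g., slowing) trend model:

```python
# Linear extrapolation through the two quoted data points:
# 3.5 s/word in 2015, 2.0 s/word "today" (assumed 2022).
# Human baseline: 1.0 s/word. Sanity check only; the article's
# "7 years" figure may assume a different model.
y1, t1 = 3.5, 2015
y2, t2 = 2.0, 2022
slope = (y2 - y1) / (t2 - t1)          # seconds per word, per year

parity_year = t2 + (1.0 - y2) / slope  # where the line crosses 1.0 s/word
print(f"linear trend reaches human parity around {parity_year:.0f}")
```

A straight line through those two points actually crosses the human baseline in under five years; the article's longer estimate presumably assumes progress slows as parity approaches.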

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do any more than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
 
  • #46
gleem said:
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language usage is one of the most difficult tasks for AI, considering the large number of exceptions, nuances, and contexts; this makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest is a metric: the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit the translation of another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate, then machine translations will be as good as humans' in 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do any more than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.
 
  • Like
Likes gleem
  • #47
gleem said:
If the progress in accuracy continues at the same rate then machine translations will be as good as humans in 7 years.
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content.
 
  • Like
Likes Astronuc
  • #48
mfb said:
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content
Deep Blue, IBM's computer that beat Kasparov, was an 11.5-gigaflop machine and would be incapable of what ChatGPT can do. BTW, use of language is considered an element of human intelligence. GPT is not capable of reflecting on its responses the way humans are; if we misspeak, we can correct ourselves. Keep in mind that using the internet, with its tainted data, to train it is really a bad way to train anything or anybody. When humans are taught, they are generally provided with vetted data. If we were taught garbage we would spew garbage, and actually, some do anyway.
 
  • Like
Likes Astronuc
  • #49
Hornbein said:
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.

From "The History of Computer Language Translation" https://smartbear.com/blog/the-history-of-computer-language-translation/
Human errors in translation can be, and have been, cataclysmic. In July 1945, during World War 2, the United States issued the Potsdam Declaration, demanding the surrender of Japan. Japanese Premier Kantaro Suzuki called a news conference and issued a statement (http://www.lackuna.com/2012/04/13/5-historically-legendary-translation-blunders/#f1j6G4IAprvcoGlw.99). That wasn't what got to Harry Truman. Suzuki used the word "mokusatsu", which can be translated as "no comment" (http://www.nsa.gov/public_info/_files/tech_journals/mokusatsu.pdf). The problem is, "mokusatsu" can also mean "We're ignoring it in contempt." Less than two weeks later, the first atomic bomb was dropped.
 
  • Informative
Likes berkeman
  • #50
Japanese Premier Kantaro Suzuki called a news conference and issued http://www.lackuna.com/2012/04/13/5-historically-legendary-translation-blunders/#f1j6G4IAprvcoGlw.99
This link from the NSA mentions it.
https://www.nsa.gov/portals/75/docu...ssified-documents/tech-journals/mokusatsu.pdf

But I think it is a stretch that the poor translation changed the outcome. We can't get into the heads of the participants, so we'll never know for sure.
 
  • Like
Likes gleem
  • #51
This is why the translation of languages is challenging for AI as well, and why equaling a human translator will be such an accomplishment.
 
  • #52
gleem said:
BTW use of language is considered an element of human intelligence.
So is playing chess.
ChatGPT is yet another specialized AI.
 
  • #53
GPT-4 will be released sometime this year, possibly as soon as this spring. There are rumors that it will be disruptive. There is an overview of what might be released, including the possibility that it will be multimodal, i.e., using text, speech, and images, although OpenAI will not confirm this. A review of an interview with Sam Altman, CEO of OpenAI, can be found here: https://www.searchenginejournal.com/openai-gpt-4/476759/#close and the actual interview/podcast here: https://greylock.wpengine.com/greymatter/sam-altman-ai-for-the-next-era/

One thing Altman has brought up is that these agents, as they are called, often have surprising characteristics. He emphasizes that GPT-4 will not be released until it is assured to be safe. Another interesting tidbit: work is being done on approaches to NLP beyond GPT. Issues he believes will arise with AI in the future are wealth distribution, along with access to and governance of AI.
 
  • #54
mfb said:

A TON of caution is required here - as described in this science.org article.
There are several pitfalls to using AI this way, but I think the one with the highest potential to blindside users of AI methods is this one:
At Mount Sinai, many of the infected patients were too sick to get out of bed, and so doctors used a portable chest x-ray machine. Portable x-ray images look very different from those created when a patient is standing up. Because of what it learned from Mount Sinai's x-rays, the algorithm began to associate a portable x-ray with illness. It also anticipated a high rate of pneumonia.
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers.
Ideally, the systems/software engineers would gain a sense of how the AI is making its determinations. But by AI standards this would be quite counterproductive: the whole purpose of using the AI is to avoid that kind of analysis, to use an automated AI analysis in place of a detailed human examination of the many possible tip-offs to the disease severity.
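That shortcut-learning failure is easy to reproduce in miniature. In the hypothetical toy dataset below (invented for illustration, not from the study), a "portable scanner" flag is correlated with illness, so a learner that simply scores single features by predictive accuracy prefers the scanner flag over the actual medical finding:

```python
# Toy illustration of shortcut learning (hypothetical data, not the study's).
# Each record: (has_infiltrate, portable_scanner, sick).
# In this training set, portable scanners were used only on sick patients,
# so "portable_scanner" predicts the label better than the medical finding.
train = [
    (1, 1, 1), (1, 1, 1), (0, 1, 1), (0, 1, 1),  # sick: portable scans
    (1, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),  # healthy: standing scans
]

def accuracy_if_predicting_from(feature_index):
    """Accuracy of the rule 'predict sick iff this single feature is 1'."""
    hits = sum(1 for rec in train if rec[feature_index] == rec[2])
    return hits / len(train)

acc_finding = accuracy_if_predicting_from(0)  # the real medical signal
acc_scanner = accuracy_if_predicting_from(1)  # the confound
print(acc_finding, acc_scanner)  # the confound scores higher
```

Any learner that optimizes training accuracy will latch onto the scanner flag here, exactly the "cheating" described above, and then fail on data where the confound is absent.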

In the case of the cancer patients, did the staff who made the decisions on who, when, where, and how x-rays were to be made base those decisions on their own evaluation of the patients' prognoses? If so, is the AI picking up indications in the imagery (subtle or otherwise) of this staff knowledge?
 
  • #55
It seems that developments in AI, especially in capability, are coming more and more quickly. Hardware, on the other hand, still lags, especially in power requirements. A new biologically inspired model based on the memristor may help address this issue.
A team of researchers at the University of Oxford, IBM Research Europe, and the University of Texas has announced an important feat: the development of atomically thin artificial neurons created by stacking two-dimensional (2D) materials. The results have been published in Nature Nanotechnology.
https://techxplore.com/news/2023-05-artificial-neurons-mimic-complex-brain.html

This memristor works with both electricity and light.

Such a memory would be useful for autonomous robots with limited power resources.
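The power argument comes from the memristor acting as both memory and compute element: its conductance is the stored weight and is nudged in place by pulses, with no data shuttled to a CPU. A generic pulse-update model (an illustrative sketch with arbitrary units, not the Oxford/IBM device):

```python
# Generic memristive synapse sketch: the conductance is the stored weight
# and is adjusted in place by pulses (illustrative model with arbitrary
# units, NOT the actual optoelectronic device from the article).
G_MIN, G_MAX = 0.0, 1.0   # physical conductance bounds

def apply_pulses(g, n_pulses, step=0.1):
    """Potentiating (n>0) or depressing (n<0) pulses, clipped to bounds."""
    g = g + n_pulses * step
    return max(G_MIN, min(G_MAX, g))

g = 0.5
g = apply_pulses(g, 3)    # 3 potentiating pulses raise the weight
g = apply_pulses(g, -5)   # 5 depressing pulses lower it
print(round(g, 3))
```

Because the weight update happens inside the device itself, training needs no separate memory traffic, which is the saving that matters for power-limited autonomous robots.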
 
  • #56
anorlunda said:
This link from the NSA mentions it.
https://www.nsa.gov/portals/75/docu...ssified-documents/tech-journals/mokusatsu.pdf

But I think it is a stretch that the poor translation changed the outcome. We can't get into the heads of the participants, so we'll never know for sure.
The Japanese written language is very ambiguous. A kanji character can stand for a dozen completely different words. I've noticed that the Google translator has a difficult time with it.

It's possible to write a sentence that corresponds to hundreds of differing spoken sentences. Then you can throw puns into the mix. This is a national pastime.

In Japan the study of written English is more or less required in high school. In Japanese popular music it is very common to include English phrases in the lyrics, and when they do this the practice of punning doesn't go away. There is a popular song called My Lover Is A Stapler. This makes no sense until you find out that the Japanese word for a stapler is "Hotchkiss," after the first brand of stapler to catch on in Japan. Then you can make the pun: my lover has a hot kiss. A bilingual pun! Though a real Japanese speaker might expose me as full of beans. I do know that a band named Band-Maid has an album called Maid in Japan, no doubt about those puns.

Many people think they are the best hard rock band in the world today. They are embarrassed by this first album and have suppressed it.

I have read that the world's most ambiguous language is Beijingese, in which a single word can have 253 meanings. Or something like that.
 
Last edited:
  • Like
Likes Astronuc
  • #57
cosmik debris said:
When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.
That's better/faster than commercial nuclear fusion, which has always been 10 years away for the last 5 or 6 decades. :-p
 
  • Like
Likes Tom.G and russ_watters
  • #58
I was logging into my work computer, which is Windows based, and found the following:

Hello, this is Bing! I'm the new AI-powered chat mode of Microsoft Bing that can help you quickly get summarized answers and creative inspiration.

  • Got a question? Ask me anything - short, long, or anything in between 🤗.
  • Dive deeper. Simply ask follow-up questions to refine your results.
  • Looking for things to do? I can help you plan trips or find new things to do where you live.
  • Feeling stuck? Ask me to help with any project from meal planning to gift ideas.
  • Need inspiration? I can help you create a story, poem, essay, song, or picture.
Try clicking on some of these ideas below to find out why the new Bing is a smarter way to search.

AI has succeeded in being obnoxious.
 
  • Like
Likes berkeman
  • #60
Astronuc said:
That's better/faster than commercial nuclear fusion, which is always 10 years away for the last 5 or 6 decades. :-p
If anything, the closer we get to it, the further away it gets.
 
  • Like
Likes Astronuc
  • #61
Researchers at Rice University, in conjunction with Intel, have developed a new algorithm for machine learning called the sub-linear deep learning engine (SLIDE), which uses locality-sensitive hashing to avoid full matrix multiplications and the need for a GPU. In a comparison on a 100-million-parameter test case, this algorithm running on a 44-core Intel Xeon CPU was trained in less than one-third of the time taken by the standard, Google's TensorFlow software, on an Nvidia V100 GPU.

https://www.unite.ai/researchers-cr...Us) without specialized acceleration hardware.
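As I understand the SLIDE idea, locality-sensitive hashing picks out, for each input, only the few neurons likely to respond strongly, and only those activations are computed. A greatly simplified sketch using random-hyperplane hashing (illustrative only, not the actual SLIDE implementation):

```python
import random

# Greatly simplified sketch of SLIDE's core trick: hash each neuron's weight
# vector with random-hyperplane LSH, then for each input compute activations
# only for neurons in the input's hash bucket (similar direction = likely
# large dot product). Illustrative only, not the real implementation.
random.seed(1)
DIM, N_NEURONS, N_PLANES = 8, 100, 4

planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_PLANES)]
weights = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_NEURONS)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lsh_bucket(v):
    """Sign pattern against the random hyperplanes -> bucket id."""
    return tuple(dot(p, v) > 0 for p in planes)

# Index every neuron once by the bucket of its weight vector.
buckets = {}
for idx, w in enumerate(weights):
    buckets.setdefault(lsh_bucket(w), []).append(idx)

def sparse_forward(x):
    """Compute activations only for neurons sharing the input's bucket."""
    active = buckets.get(lsh_bucket(x), [])
    return {i: dot(weights[i], x) for i in active}

x = [random.gauss(0, 1) for _ in range(DIM)]
acts = sparse_forward(x)
print(f"computed {len(acts)} of {N_NEURONS} neuron activations")
```

Because hash lookup is cheap and only a small fraction of neurons are touched per input, the work per sample grows sub-linearly in layer width, which is how a CPU can keep pace with a GPU doing dense multiplications.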
 
  • Informative
Likes Filip Larsen
  • #62
  • Like
Likes Greg Bernhardt
