Some items on LLM limits, LLM uses, and AI robots.
Summary: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention.
To create a novel or a painting, an artist makes choices that are fundamentally alien to artificial intelligence.
Real paintings bear the mark of an enormous number of decisions. By
comparison, a person using a text-to-image program like DALL-E
enters a prompt such as "A knight in a suit of armor fights a
fire-breathing dragon," and lets the program do the rest. (The
newest version of DALL-E accepts prompts of up to four thousand
characters: hundreds of words, but not enough to describe every
detail of a scene.) Most of the choices in the resulting image have
to be borrowed from similar paintings found online; the image might
be exquisitely rendered, but the person entering the prompt can't
claim credit for that.
...
What I'm saying is that art requires making choices at every scale;
the countless small-scale choices made during implementation are just
as important to the final product as the few large-scale choices made
during the conception. It is a mistake to equate "large-scale" with
"important" when it comes to the choices made when creating art;
the interrelationship between the large scale and the small scale
is where the artistry lies.
For a contrary take on the above, see Séb Krier's post on X/Twitter.
How do we think about a fundamentally unknown and unknowable risk, when the experts agree only that they have no idea? (Benedict Evans)
Serious AI scientists who previously thought AGI was probably
decades away now suggest that it might be much closer. At the
extreme, the so-called 'doomers' argue there is a real risk
of AGI emerging spontaneously from current research and that this
could be a threat to humanity, and they call for urgent government
action. Some of this comes from self-interested companies seeking
barriers to competition ('this is very dangerous and we are
building it as fast as possible, but don't let anyone else do
it'), but plenty of it is sincere.
...
They don't know, either way, because we don't have a coherent theoretical
model of what general intelligence really is, nor why people seem to be
better at it than dogs, nor how exactly people or dogs are different to
crows or indeed octopuses. Equally, we don't know why LLMs seem to work
so well, and we don't know how much they can improve. We know, at a basic
and mechanical level, about neurons and tokens, but we don't know why
they work. We have many theories for parts of these, but we don't know
the system. Absent an appeal to religion, we don't know of any reason why
AGI cannot be created (it doesn't appear to violate any law of physics),
but we don't know how to create it or what it is, except as a concept.
Check out the videos.
During testing, the table tennis bot was able to beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. It's not ready to take on pros, however. The robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played.
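As a quick sanity check on those figures (illustrative only: the excerpt does not say how many opponents fell into each skill level), a few lines of Python can enumerate the group splits that are consistent with beating every beginner, winning 55% of intermediate matches, losing every advanced match, and still finishing at roughly 13 wins out of 29 games (45%).

```python
# Back-of-the-envelope check on the reported table-tennis results.
# The excerpt gives win rates per skill level but not group sizes,
# so we enumerate the splits consistent with ~13 wins out of 29 games.

TOTAL_GAMES = 29
TOTAL_WINS = round(0.45 * TOTAL_GAMES)  # 45% of 29 is about 13 wins

for beginners in range(TOTAL_GAMES + 1):
    for intermediates in range(TOTAL_GAMES - beginners + 1):
        advanced = TOTAL_GAMES - beginners - intermediates
        # Reported: all beginner matches won, 55% of intermediate, none advanced.
        expected_wins = beginners + 0.55 * intermediates
        if abs(expected_wins - TOTAL_WINS) < 0.5:
            print(f"beginners={beginners:2d}  intermediates={intermediates:2d}  "
                  f"advanced={advanced:2d}  ->  expected wins ~ {expected_wins:.1f}")
```

Any split the script prints reproduces the overall 45% figure; the actual breakdown of players by level would have to come from the study itself.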
The system, built by Boston company Perceptive, uses a hand-held 3D volumetric scanner, which builds a detailed 3D model of the mouth, including the teeth, gums and even nerves under the tooth surface, using optical coherence tomography, or OCT.
This cuts harmful X-ray radiation out of the process, as OCT uses nothing more than light beams to build its volumetric models, which come out at high resolution, with cavities automatically detected at an accuracy of around 90%.
At this point, the (human) dentist and patient can discuss what needs doing, but once those decisions are made, the robotic dental surgeon takes over. It plans out the operation, then jolly well goes ahead and does it.
Check out the videos at the above URL.
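For a rough sense of what "automatic cavity detection on a volumetric scan" can mean in code, here is a deliberately simplified sketch. It is not Perceptive's actual pipeline, which isn't described in the excerpt: it just treats the OCT output as a 3D reflectivity array, thresholds it, and groups connected low-intensity voxels into candidate regions with NumPy and SciPy.

```python
# Toy cavity-candidate detector on a 3D OCT-style intensity volume.
# Illustrative only; a real clinical system would use trained, calibrated models.
import numpy as np
from scipy import ndimage

def find_cavity_candidates(volume, threshold, min_voxels=50):
    """Return bounding boxes of connected low-intensity regions.

    volume     : 3D array of reflectivity values (arbitrary units)
    threshold  : intensities below this count as possible demineralisation
    min_voxels : discard tiny specks that are probably noise
    """
    mask = volume < threshold                    # candidate voxels
    labels, _ = ndimage.label(mask)              # connected components in 3D
    boxes = []
    for idx, region in enumerate(ndimage.find_objects(labels), start=1):
        if region is None:
            continue
        size = (labels[region] == idx).sum()     # voxels in this component
        if size >= min_voxels:
            boxes.append(region)                 # tuple of slices, one per axis
    return boxes

# Toy usage: a dark "cavity" blob embedded in a bright synthetic volume.
vol = np.full((64, 64, 64), 200.0)
vol[20:28, 30:38, 10:18] = 40.0
print(find_cavity_candidates(vol, threshold=100.0))
```

A production system would replace the fixed threshold with models trained and validated against ground-truth scans, which is presumably where the roughly 90% detection accuracy quoted above comes from.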
We help brands monitor, understand, and optimize visibility across all major LLM platforms.