#neuralnetwork

DALL-E 2: AI Creates Realistic Images From Text Descriptions

Earlier today, OpenAI, the research laboratory founded by Elon Musk, Sam Altman, and others, released the second iteration of an artificial intelligence tool called DALL-E 2. The new AI tool can create and edit images from natural language inputs. Ask DALL-E 2 to draw "teddy bears mixing sparkling chemicals as mad scientists in a steampunk style", and boom: you get the image above.

The rapid advancement of artificial intelligence is one of those things we hear about often but don't grasp at a gut level. After all, human brains can't comprehend Generative Pre-trained Transformer 3's capacity of 175 billion machine learning parameters (yeah, don't worry about it), but we understand the nascent power of AI when we see it generate photorealistic images from text inputs.

Just a little over a year ago, OpenAI released DALL-E, the first iteration of the text-to-image AI tool (the name is a combination of Salvador Dalí and Pixar's WALL-E). And while it was pretty good, you could definitely tell that its images were computer generated.

In a blog post introducing DALL-E 2, Altman shows us what a difference one year can make in AI research. "It's a reminder that predictions about AI are very difficult to make. A decade ago, the conventional wisdom was that AI would first impact physical labor, and then cognitive labor, and then maybe someday it could do creative work. It now looks like it's going to go in the opposite order."

Take a look at some of the amazing images that DALL-E 2 created below.
AI Creates Photorealistic Portraits of Historical Figures

Dutch photographer and digital artist Bas Uterwijk gives us a glimpse of how historical figures would have looked through these amazing reconstructions, made possible by a neural network. The photo above is the artist's reconstruction of Nefertiti, the great royal wife of Pharaoh Akhenaten. Nefertiti is one of the newest additions to Uterwijk's series, alongside figures from the Renaissance, 18th-century Europe, and other time periods.

To create these portraits, Uterwijk uploads numerous references of the person's likeness to the AI applications. Then, he makes small adjustments to the program until he is satisfied with the result. "These 'Deep Learning' networks are trained with thousands of photographs of human faces and are able to create near-photorealistic people from scratch or fit uploaded faces in a 'Latent Space' of a total of everything the model has learned," Uterwijk explains. "I think the human face hasn't changed dramatically over thousands of years and, apart from hairstyles and makeup, people that lived long ago probably looked very much like us, but we are used to seeing them in the often distorted styles of ancient art forms that existed long before the invention of photography." (A minimal sketch of this latent-space fitting follows this post.)

Uterwijk also bases some of his recreations on paintings and sculptures, like his reconstruction of David (which is based on Michelangelo's sculpture of the biblical figure). Neural networks truly are a technological marvel.

(All Images: Bas Uterwijk)

#NeuralNetwork #AI #ArtificialIntelligence #Reconstruction #Art #Photorealism #History
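For the curious, here is a minimal sketch of what "fitting an uploaded face into a latent space" generally means: a latent vector is optimized until a generator's output matches a reference photo. The generator below is a toy stand-in, not Uterwijk's actual tooling; in practice a pretrained face model (e.g. a StyleGAN checkpoint) would take its place, and the sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder for a pretrained face generator (hypothetical)."""
    def __init__(self, latent_dim=512, image_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, image_pixels), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def project(generator, target_image, latent_dim=512, steps=200, lr=0.05):
    """Optimize a latent vector z so generator(z) matches the target photo."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(generator(z), target_image)
        loss.backward()
        opt.step()
    return z.detach()

# Usage: fit a (flattened) reference image, then decode the fitted latent.
gen = ToyGenerator().eval()
target = torch.rand(1, 64 * 64 * 3) * 2 - 1  # stand-in for a real photo
z_fit = project(gen, target)
reconstruction = gen(z_fit)
```

Once a face sits in the latent space, small edits to the vector produce the kind of fine adjustments Uterwijk describes making until he is satisfied with the result.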
Researchers Developed a 'Speech Neuroprosthesis' That Converts a Paralyzed Man's Brain Waves to Speech

UCSF neurosurgeon Edward Chang has spent the last decade working on a technology that would allow people with paralysis to communicate even though they're incapable of speaking on their own. Now, Chang and his team have succeeded in decoding full words from brain activity. "It shows strong promise to restore communication by tapping into the brain's natural speech machinery," he said.

The first patient in the trial suffered a devastating brainstem stroke 15 years ago, which left him paralyzed and unable to speak. Since his injury, he has communicated by using a pointer attached to a baseball cap to poke at letters on a computer screen.

Chang surgically implanted a high-density electrode array over the patient's speech motor cortex. Then, he, neurology professor Karunesh Ganguly, and colleagues recorded 22 hours of neural activity in the patient's brain over several months while the patient attempted to vocalize some words many times. The data was fed into custom neural network models, a form of artificial intelligence, trained to detect when the patient was attempting to speak and, from subtle patterns in the brain activity, to identify which word he was trying to say. (A rough sketch of this decoding step follows this post.)

The UCSF team found that their system was able to decode words from the patient's brain waves at a rate of up to 18 words per minute with up to 93 percent accuracy.

#speech #brain #brainwave #electrode #neurology #artificialintelligence #AI #neuroprosthesis #paralysis #stroke #neuralnetwork #UCSF
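To make the word-decoding step concrete, here is a hedged sketch: a classifier maps a window of multichannel electrode activity to one word from a fixed vocabulary. The architecture, channel count, window length, and vocabulary size below are illustrative assumptions, not the UCSF team's actual model.

```python
import torch
import torch.nn as nn

N_CHANNELS = 128   # assumed number of electrode channels
N_TIMESTEPS = 200  # assumed samples per attempted-speech window
VOCAB_SIZE = 50    # assumed size of the decodable word set

class WordDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # A recurrent layer summarizes the neural time series...
        self.rnn = nn.GRU(N_CHANNELS, 256, batch_first=True)
        # ...and a linear head scores each candidate word.
        self.head = nn.Linear(256, VOCAB_SIZE)

    def forward(self, x):       # x: (batch, time, channels)
        _, h = self.rnn(x)      # h: (num_layers, batch, 256)
        return self.head(h[-1]) # word logits: (batch, vocab)

# Usage: classify one window of (stand-in) neural activity.
model = WordDecoder()
window = torch.randn(1, N_TIMESTEPS, N_CHANNELS)  # placeholder recording
predicted_word = model(window).argmax(dim=-1)
```

In the real system, such a classifier would be trained on the recorded attempted-speech windows, with a separate detection stage flagging when the patient is trying to speak at all.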