Code May Be Written In Natural English In The Future

We've already discussed the repercussions of the rise of AI-powered chatbot ChatGPT, from students using it to cheat on their essays to creatives getting laid off and replaced by these applications. The effect is both wild and really disheartening. On the other side of the coin, the technology also shows just how much more it can do: write code. Many computer scientists see a future where everyone can code using natural language, because AI can translate plain English into code. It can also auto-complete, debug, and make suggestions. A manifestation of this future was released in August 2021 with OpenAI's Codex, a private beta tool that translates natural language commands into code. It was trained on the same data as ChatGPT. According to OpenAI co-founder and CTO Greg Brockman, this marks the "beginning of a shift in how computer software is written."

Image credit: OpenAI via Vice

#ChatGPT #AI #computers #coding #naturallanguage #artificialintelligence
Ryan Reynolds vs. The ChatGPT AI

The results were surprising, to say the least. In an interesting turn of events, Ryan Reynolds turned to OpenAI's generative AI chatbot ChatGPT to see what it would churn out when asked to create an advertisement for a mobile service company.

Now, for those unaware of AI programs, ChatGPT is known for its incredible ability to answer questions, write essays, debug code, and even create an entire script based on a YouTuber's previous content. Maybe that's what inspired the actor to make the video. Who knows, really? In the new spot, also posted on YouTube, Reynolds asked ChatGPT to write a commercial for Mint Mobile in his voice, with some conditions for the script to be deemed acceptable. According to the actor, the resulting ad had to include a joke, a curse word, and a reminder that Mint's holiday promo was still going [at the time]. The program certainly did its best, with Reynolds sharing that the result sounded "eerie" and "mildly terrifying." Check out the video above to see how it all unfolded!

#advertisement #RyanReynolds #MintMobile #AI #spot #ChatGPT #OpenAI
AI Generates Victorian-Era People

Alright, it's best to put a disclaimer here: the computer is not spitting out actual human beings, only portraits. Although the idea of an AI generating living creatures is terrifying, it does sound like the premise of the next big sci-fi series (or book, we're not picky).

Midjourney is an AI image generator capable of creating ultra-photorealistic images. Photographer Mario Cavalli decided to play with the application, and subsequently released a set of images that certainly took us by surprise. These photographs feel like mementos of a lost period of time, showcasing portraits of people in very realistic moments and poses. One would think they're actually real. According to Cavalli, the images came straight from the machine learning tool; no Photoshop was required to make them stand out. "Midjourney v4 came out recently, I think it is currently the best [image generator] around," Cavalli told PetaPixel, praising the quality of the generated photographs.

To get these wonderful, hyper-realistic results, the photographer built on and adapted text prompts created by Everton Lisboa and Ben Roffelson. These prompts are used in tools such as Midjourney to describe what image the user wants. For reference, Cavalli used the phrases "sharp focus," "10mm lens," and "wet collodion photography."

Image credit: Mario Cavalli on Midjourney

#photography #artificialintelligence #AI #MarioCavalli #Midjourney #portraits
This Ugly Sweater Will Make You Invisible to AI

Computer science students at the University of Maryland recently published a paper about their efforts to create a garment that prevents artificial intelligences from easily recognizing the wearer as a human. It's called the Invisibility Cloak.

If I understand their project correctly (and that's a big 'if'), they were able to determine what visual qualities allow an AI to detect a human, then used that data to create an image that the AI would be strongly disinclined to recognize as a human. Printing this image on a sweatshirt would usually induce the test AI to skip over a person wearing it.

-via Hack A Day

#artificialintelligence #invisibility #ai #invisibilitycloak #sweater #uglysweater
The Follower: Artist Used AI to Find Instagram Photo Moments as Captured by Surveillance Cameras

Surveillance cameras are good when used to prevent crimes from happening. They can also play an essential role in capturing said crimes (and identifying perpetrators) when they happen. However, the same cameras can be used for nefarious purposes, like secretly tracking people's movements. And with privately installed surveillance cameras spread across public places worldwide, monitoring persons of interest has never been easier.

To demonstrate the dangers of our current surveillance technology, Dries Depoorter created an art project called "The Follower." True to its name, The Follower zeroes in on an unsuspecting Instagram user and then pieces the Instagram photo together with footage from a nearby surveillance camera.

Depoorter's inspiration for the project came as he watched a live feed of Times Square in New York, where he saw a woman spending a lot of time taking photos of herself (most likely to capture that perfect shot). Depoorter thought the woman was probably an influencer, so he scoured Instagram for photos geo-tagged to Times Square. He found none, but this gave him an idea: he could combine people's Instagram photos with footage from cameras made available to the public.

One of Depoorter's unsuspecting subjects was David Welly Sombra Rodrigues. A friend sent him a news article about Depoorter's The Follower, and he was surprised to see that he had been, unknowingly, filmed.

Unfortunately, Depoorter's YouTube video has already been taken down because of a copyright claim by EarthCam, a company that streams webcam content on the Internet. Depoorter, however, states that his project is not about the companies that make such things possible. Rather, his point is that "there are many unprotected cameras all over the world." Whether we like it or not, we can be monitored, whether by an individual or by an organization. Depoorter says it best:
"If one person can do this, what can a government do?"(Image Credit: Dries Depoorter/ EarthCam)#AI #ArtificialIntelligence #EarthCam #TheFollower #Privacy #Surveillance #Art #Technology
Meet Loab, an AI-Generated Demon that Spontaneously Emerged and Now Haunts Many AI Images

You're probably gonna need some bleach to wipe that image from your eyes. If not, then kudos to you and your mental fortitude. The image above comes from a Swedish musician known as Supercomposite, who started a thread on his Twitter account sharing the story of how he might have found "the first cryptid of the latent space": "I discovered this woman, who I call Loab, in April. The AI reproduced her more easily than most celebrities. Her presence is persistent, and she haunts every image she touches."

The image depicts a grotesque, horrifying woman who could be someone suffering, a mythological demon, or some weird eldritch entity. Her creator, Supercomposite, calls her Loab.

The musician shared that the "demon" spawned during some experimentation with artificial intelligence. He was playing with negative prompt weights, commands fed into the AI that tell it to churn out an image as different from the prompt as possible. The magic words that started it all were "Brando::-1." Supercomposite wrote that he only wanted to see if the opposite of the logo that prompt produced would be a picture of the American actor Marlon Brando. "I typed 'DIGITA PNTICS skyline logo::-1' as a prompt. I received these off-putting images, all of the same devastated-looking older woman with defined triangles of rosacea(?) on her cheeks," he further explained. Scared and kind of amazed, the musician has continued to generate more images of Loab, which you can see in his mega-thread here.

Image credit: Supercomposite/Twitter

#AI #artificialintelligence #art #experimentation #horror #woman #Loab #Supercomposite #Twitter
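The "prompt::weight" syntax can be sketched in a few lines. This is a toy illustration, not Midjourney's actual implementation; the embedding numbers and function names below are invented. The core idea is that a prompt's embedding is multiplied by its weight, so a negative weight steers the generator away from that concept:

```python
# Toy sketch of weighted prompts like "Brando::-1" (Midjourney-style syntax).
# The embeddings are invented 3-number stand-ins for a real text encoder's output.

def parse_weighted_prompt(prompt):
    """Split 'text::weight' into (text, weight); weight defaults to 1.0."""
    if "::" in prompt:
        text, weight = prompt.rsplit("::", 1)
        return text.strip(), float(weight)
    return prompt.strip(), 1.0

# Pretend embedding table (a real system would use a learned text encoder).
EMBEDDINGS = {
    "Brando": [0.9, 0.1, 0.4],
    "skyline logo": [0.2, 0.8, 0.5],
}

def conditioning_vector(prompts):
    """Weighted sum of prompt embeddings; a negative weight pushes the
    generator *away* from that concept."""
    total = [0.0, 0.0, 0.0]
    for p in prompts:
        text, weight = parse_weighted_prompt(p)
        total = [t + weight * v for t, v in zip(total, EMBEDDINGS[text])]
    return total

print(parse_weighted_prompt("Brando::-1"))  # ('Brando', -1.0)
print(conditioning_vector(["Brando::-1"]))  # the Brando embedding, negated
```

With "Brando::-1" alone, the generator is conditioned on the exact opposite of the "Brando" direction, which is why Supercomposite's experiment produced such unexpected imagery.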
Missing Person Posters Brought to Life by AI

Having a loved one go missing is one of the most painful experiences in a person's life. Now, thanks to technology, posters to find missing children are being upgraded with AI. The tech converts standard photos into 3D images of the missing person, complete with smiling faces. These enhanced posters are already being put up on billboards across London, according to the Evening Standard.

Behavioral scientists say that 3D images increase the likelihood of passers-by engaging with the notice, upping the chance of reported sightings. The posters' wording has also been updated: "missing" has been replaced with the more active phrase "help find."

Image: Evening Standard

#missingperson #poster #AI #artificialintelligence
Robot Uses AI to Peel a Banana

Peeling a banana is an effortless task for us humans. The same cannot be said for robots, however. These machines find manipulating deformable objects problematic, as the activity requires knowing what kind of force and dexterity to apply. There is a solution, though: human demonstration data. A group of researchers from the University of Tokyo did just that. After being trained on this data, their robot was able to pick up a banana from a table and peel it successfully, all in less than three minutes, which is fast for a robot. There is one problem with the system, however: it needs "quite a lot of expensive GPUs." Nevertheless, it should prove useful, not just for peeling bananas but for other activities that require fine motor skills.

(Image Credit: futuretimelinedotnet/YouTube)

#Bananas #AI #ArtificialIntelligence #DeepLearning #Robots
DALL-E 2: AI Creates Realistic Images From Text Descriptions

Earlier today, OpenAI, the research laboratory founded by Elon Musk, Sam Altman, and others, released the second iteration of an artificial intelligence tool called DALL-E. The new AI tool, DALL-E 2, can create and edit images from natural language inputs. Ask it to draw "teddy bears mixing sparkling chemicals as mad scientists in a steampunk style," and boom: you get the image above.

The rapid advancement of artificial intelligence is one of those things we hear about often but don't grasp at a gut level. After all, human brains don't comprehend Generative Pre-trained Transformer 3's 175 billion machine learning parameters (yeah, don't worry about it), but we understand the nascent power of AI when we see it generate photorealistic images from text inputs.

Just a little over a year ago, OpenAI released DALL-E, the first iteration of the text-to-image AI tool (the name is a combination of Salvador Dalí and Pixar's WALL-E). And while it was pretty good, you could definitely tell its images were computer generated. In a blog post introducing DALL-E 2, Altman shows us what a difference one year can make in AI research: "It's a reminder that predictions about AI are very difficult to make. A decade ago, the conventional wisdom was that AI would first impact physical labor, and then cognitive labor, and then maybe someday it could do creative work. It now looks like it's going to go in the opposite order."

Take a look at some of the amazing images that DALL-E 2 created below.
It Took AI Less Than 6 Hours to Invent 40,000 Potentially Lethal Toxins

This feels like the start of a real-life adaptation of Resident Evil, to be honest. I hope it doesn't happen, though!

An AI created by scientists at Collaborations Pharmaceuticals Inc. was put into a "bad actor mode" to see how easily it could be abused as a biological weapons tool. It certainly played the part well, inventing 40,000 potentially lethal molecules in less than six hours. The machine's original purpose was to search for helpful drugs, in line with the company's goal of finding treatments for rare diseases. The researchers published their findings in the journal Nature Machine Intelligence.

According to Fabio Urbina, a senior scientist at the company and the lead author of the study, the team trained the AI on datasets of molecules that had been tested for toxicity, and the AI learned how to make toxins from that information. The researchers saw the model producing molecules similar to chemical warfare agents in just a short period of time. Urbina told The Verge that the most concerning thing they learned from the study was how easy it was for artificial intelligence to do this using only publicly available information. "If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets. So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse," Urbina stated.

Image credit: Urbina, et al.

#toxins #artificialintelligence #ai #science #CollaborationsPharmaceuticals
Pac-Man Shaped Xenobots are Lab-Made Living Robots that Can Replicate Themselves

Xenobots are interesting entities created by scientists just a couple of years ago. Made from frog stem cells and built according to AI-created blueprints, they can knit themselves into small spheres and move around lab dishes. But scientists found something even more interesting about xenobots a few months ago: they can self-replicate, and they do so by moving.

Xenobots, according to study co-author Douglas Blackiston, find loose cells, "sort of like robotic parts," in their environment and cobble them together. The result is a new generation of xenobots. Blackiston and the other researchers call this reproduction-by-movement "kinematic self-replication."

Generally, the spheroid xenobots can only create one generation before they die out. Scientists, however, helped the xenobots spawn up to four generations by using an AI program that predicted the optimal shape for a progenitor xenobot: a C-shape (or a hungry Pac-Man).

Kirstin Petersen, an engineer who studies groups of robots, describes this as an "incredibly exciting breakthrough," pointing out the possible uses of xenobots in biomedicine and therapeutics.

(Image Credit: Douglas Blackiston and Sam Kriegman)

#Xenobots #Robotics #Engineering #Reproduction #PacMan #Biomedicine #AI
AI-Powered Simulations Let Robot Cheetah Teach Itself How to Run Faster Than Ever

A robot developed at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has broken the record for its fastest run ever recorded. The unique aspect of this robotic cheetah is that it wasn't programmed to run at an incredible speed; it was tasked with figuring out how to run that fast through trial and error.

Usually, programming machines involves humans doing all the work, installing precise instructions on what to do and how to do it. According to researchers Gabriel Margolis and Ge Yang, the problem with this approach is that it isn't scalable: a huge chunk of time is needed to manually program a robot to operate in many different environments. The robot cheetah represents an attempt by experts to create a robot that functions through a learn-by-experience model. Through the project, the robot hit a top speed of 3.9 meters per second, or roughly 8.7 mph, when sprinting. Check out MIT's video on the project and its results below.

Image credit: MIT

#robotics #MIT #research #AI #reinforcementmodel #programming
Desolate Civilisation: Collage of AI Artworks that Look Like the Mona Lisa When Put Together

What if Da Vinci's popular masterpiece were recreated as a high-quality yet dystopian-looking collage? Look no further: @TDRAW_Ai_Art published a collage of AI artworks, titled Desolate Civilisation: Collected Fragments, that shows the silhouette of the Mona Lisa through different illustrations.

Shared in a tweet by NightCafe Studio, the artwork features nine panels of cities, some in the sky and some perched high on mountaintops. Aside from the cities forming the iconic silhouette of the lady in Da Vinci's painting, the collage also tries to mimic the color scheme of the original artwork, possibly to evoke the familiarity of the Mona Lisa.

Image credit: @TDRAW_Ai_Art via NightCafe Studio

#artwork #AI #NightcafeStudio #collage #recreation #MonaLisa
AI Went Head to Head Against an F-16 Pilot in Simulated Aerial Combat and Beat the Human 5-0

Aerial warfare in the 21st century and beyond might look more like a game simulation than a physical dogfight, and the AI algorithms controlling the planes may change the face of war forever.

In August 2020, the US Defense Department's research agency said that an algorithm had defeated a human pilot in simulated aerial combat. The best of eight competing AI pilots was matched against an F-16 pilot in five simulated dogfights. The AI beat the human 5-0. In 2021, China's own AI beat a human pilot from its air force. While the pilot won the earlier face-offs, the AI learned from each encounter and by the end was able to defeat him.

Militarized AI will bring many changes. Aircraft can be designed to perform maneuvers no human could withstand, and automated combatants could save the years it takes to train skilled fighter pilots. With these rapid advancements, 2022 may show us that future warfare will be a matter of skillful coding rather than courageous flying.

Image: U.S. Air Force photo by Master Sgt. Andy Dunaway/Released/Wikipedia

#AI #artificialintelligence #FighterPilot #AerialCombat #F16 #DogFight
100 Billion Facial Photos: Facial Recognition Company Clearview AI Aims to Identify Almost Every Human on Earth

Is this venture too ambitious, or is it just straight-up problematic? Privacy risks aside, we can all admit that this company has set itself a lofty goal. In a December 2021 financial presentation for its investors, facial recognition company Clearview AI announced that it aims to collect 100 billion photos, enough to identify almost every human on Earth.

While the announcement may be mostly a motivator for investors to provide funding, the presentation claimed that Clearview already has 10 billion images and is adding 1.5 billion images a month. How did the company get these images? Well, according to the Washington Post, it's all thanks to social networks: "Clearview has built its database by taking images from social networks and other online sources without the consent of the websites or the people who were photographed. Facebook, Google, Twitter, and YouTube have demanded the company stop taking photos from their sites and delete any that were previously taken. Clearview has argued its data collection is protected by the First Amendment," the Post detailed. Yikes.

Image credit: Wikimedia Commons

#artificialintelligence #ai #facialrecognition #ClearviewAI #images #photo
People Find Fake AI-Generated Faces More Trustworthy Than Real Human Faces

We may recognize deepfakes as memes and internet gags, all in the name of fun, and we are pretty good at telling them apart from the real thing thanks to the uncanny-valley effect of the fakes' hollow eyes. But what if the technology becomes so advanced that fakes are indistinguishable? What repercussions could come from that?

A new study finds that people are increasingly less able to differentiate between real faces and virtual ones created by artificial intelligence. What's more, people tend to find the AI-generated faces more trustworthy than actual visages. This raises concerns that the technology could be used for all kinds of misdeeds: disinformation campaigns for political gain, fake pornography for extortion, and, of course, fraud. Although the technology is not yet flawless enough for such impact, the experts agree that it may well be very soon.

Image: Nightingale SJ and Farid H/PNAS

Image caption: Images ranked from most (top and upper middle) to least (bottom and lower middle) accurately classified as real (R) and synthetic (S).

#deepfake #face #AI #artificialintelligence #trustworthiness
AI Can Identify an Artist's Brushstrokes with over 96% Accuracy

There was a time when art connoisseurs were regarded as the ultimate go-to source for confirming the authenticity of a piece of artwork. Their ability to see what untrained eyes couldn't was seen as mysterious, even elusive, something of a higher plane that only a few were privy to. But now, the advent of AI might change the game completely. Recently, AI analysis performed by Art Recognition determined, with 91% certainty, that "Samson and Delilah" (c. 1609-10), a painting attributed to Peter Paul Rubens that sold for £2.5 million in 1980, is not the artist's handiwork.

Image: Samson and Delilah/Peter Paul Rubens/The National Gallery
AI Scientist Claimed that Artificial Intelligence is Becoming "Slightly Conscious"

You may want to take this with a grain of salt, because I sure am.

Ilya Sutskever, the chief scientist of the OpenAI research group, claimed that artificial intelligence is slowly becoming conscious. In a tweet, Sutskever stated that "it may be that today's large neural networks are slightly conscious." According to Futurism's Noor Al-Sibai, this take is unusual for someone in the scientist's position: the widely accepted assumption is that AI still falls short of human intelligence and is nowhere close to being conscious. The tweet is the first time the scientist has claimed AI consciousness as a possibility. Maybe Sutskever was joking, or maybe he knows something we don't. Only time will tell, I believe!

Image credit: Wikimedia Commons

#artificialintelligence #consciousness #computersentience #AI
Artist Used AI to Turn BuzzFeed Headlines Into Horrifying Pictures

Generative artist Max Ingham, also known by the pseudonym Somnai, created images from infamous and memorable BuzzFeed headlines. The artist was "inspired" after an interaction with Max Woolf, a BuzzFeed data scientist. Woolf quipped that he needed to look into how the Contrastive Language-Image Pre-Training (better known as CLIP) neural network would handle memorable headlines. Ingham did the job for him, and the results were kinda creepy.

As seen in the photo above, the disgusting, nightmarish pink Kraft mac and cheese and the weird, abstract-looking baby the network generated from mashing up Grimes and Elon Musk are unsettling. Then again, they also look like images you'd see in a museum.

Image credit: @Somnai_dreams via Twitter

#art #AI #artificialintelligence #technology #Buzzfeed #CLIP
"Hi Toilet": Voice-Operated Tokyo Public ToiletPublic restrooms are probably some of the most unhygienic places a person can think of. From stepping on the flush handle to opening the door with toilet paper or pushing it with our elbows, we just can’t afford to make contact with anything that might make us sick… or worse...But this innovative restroom from Tokyo, Japan solves that problem. This is Hi Toilet and it’s the latest release of the Tokyo Toilet Project. Designed by Kazoo Sato, the chief creative officer at advertising agency TBWA\Hakuhodo, Hi Toilet is powered by AI technology which lets us enter and do our business without ever having to come in contact with anything.With a simple voice greeting, we can then instruct the restroom to perform tasks that would otherwise need our touch like turning on the tap, flushing the toilet, or even playing some soothing music.We need more of Hi Toilet just because we can’t help ourselves with needing to go to public restrooms sometimes. Sure is exciting to have our very own!Photo: Courtesy TBWA#toilet #HiToilet #TokyoToiletProject #AI
AI Creates Photorealistic Portraits of Historical Figures

Dutch photographer and digital artist Bas Uterwijk gives us a glimpse of how historical figures might have looked through these amazing reconstructions made with a neural network. The photo above is the artist's reconstruction of Nefertiti, the great royal wife of Pharaoh Akhenaten. Nefertiti is one of the newest additions to Uterwijk's series, alongside figures from the Renaissance, 18th-century Europe, and other time periods.

To create these portraits, Uterwijk uploads numerous references of the person's likeness to the AI applications, then makes small adjustments to the program until he is satisfied with the result. "These 'Deep Learning' networks are trained with thousands of photographs of human faces and are able to create near-photorealistic people from scratch or fit uploaded faces in a 'Latent Space' of a total of everything the model has learned," Uterwijk explains. "I think the human face hasn't changed dramatically over thousands of years and apart from hairstyles and makeup, people that lived long ago probably looked very much like us, but we are used to seeing them in the often distorted styles of ancient art forms that existed long before the invention of photography."

Uterwijk also bases some of his recreations on paintings and sculptures, like his reconstruction of David (based on Michelangelo's sculpture of the biblical figure). Neural networks truly are a technological marvel.

(All Images: Bas Uterwijk)

#NeuralNetwork #AI #ArtificialIntelligence #Reconstruction #Art #Photorealism #History
Samsung's Robot Chef Brings Automated Assistance to Your Kitchen!

Samsung has introduced the Samsung Bot Chef, a machine that can help people prepare their meals! According to the official press release, the Bot Chef can "read, understand, and divide up the tasks in regular recipes, and use tools that you normally use." It uses sensors to look for and find things in the kitchen, and if it can't find or reach something, the machine will ask for help. Now that's like having a companion in the kitchen!

This product concept is intelligent enough to slow down or stop completely if it detects a person nearby, waiting for the person to move away before continuing its tasks. In addition to its multiple sensors, the Bot Chef has two Saram arms that hold and manipulate different kitchen tools. One of the highlights of Samsung's newest machine is that it can learn new skills by downloading them from "a skills ecosystem." This means it can learn how to find and use non-smart appliances to help people in the kitchen. From blending a soup to making a cup of coffee, the Bot Chef holds a great deal of potential!

Image credit: Samsung

#Samsung #BotChef #Machine #Robot #Technology #Kitchenware #Cooking #ArtificialIntelligence #AI
Mark Rober Created a World Record Domino Robot That Sets Up 100,000 Dominoes in 24 Hours

Mark Rober proved to the whole world that he is the king of dominoes by setting up a hundred thousand of them in just 24 hours. How did he do it? Here's what he had to say to his rival, domino queen Lily Hevesh, about the feat: "I suck at dominoes, Lily, but I'm good at engineering, which means I'm actually really good at dominoes."

Engineering! Wow! What can't we do with technology, right? As it turns out, Rober engineered a robot to set up dominoes like no one else can. The robot, which he named "Dom," now holds the world record for being the fastest to arrange 100K dominoes. So it's not really Rober but actually Dom who did all the work, or is it the other way around? So mind-boggling! What do you think?

#dominoes #engineering #domino #robot #worldrecord #AI #MarkRober #LilyHevesh #ArtificialIntelligence #machines #programming
AI Spots Shipwrecks From the Ocean's Surface and the Air with 92% Accuracy

Finding shipwrecks may sound like the stuff of movies, but it's an important part of research by the US Navy, which is interested in finding shipwrecks that may shed light on human history, including trade, migration, and war. Unlike in the movies, finding shipwrecks usually doesn't involve a map with a bloody X marking the location of the bounty, but science has the next best thing: sonar and lidar imagery of the seafloor.

Leila Character of The University of Texas at Austin and colleagues, in collaboration with the United States Navy's Underwater Archaeology Branch, used machine learning to spot shipwrecks off the coasts of the mainland USA and Puerto Rico. Character wrote in The Conversation:

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.

Character's computer model has an accuracy of 92%, and she now hopes to extend the model to spot shipwrecks around the world.

Image: Shipwrecks off the coast of Washington at a depth of 25 m. Character L, et al. (2021) Remote Sens. 13(9) 1759.

#shipwreck #sonar #lidar #seabed #oceanfloor #archaeology #underwaterarchaeology #USNavy #artificialintelligence #AI #machinelearning
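The training recipe Character describes, labeling examples of wrecks and bare seafloor and then learning a rule that separates them, can be sketched in miniature. The toy below is not the study's actual model (which worked on real sonar and lidar imagery); every number and function here is invented for illustration. It fakes "depth patches" where a wreck shows up as a bump rising off an otherwise smooth seabed, then learns a single-feature threshold classifier:

```python
import random
import statistics

random.seed(42)

def seafloor_patch():
    # Smooth bathymetry: 64 depth readings with small noise around an 85 m baseline.
    return [random.gauss(85.0, 0.3) for _ in range(64)]

def wreck_patch():
    # A wreck protrudes from the seabed: a cluster of readings is markedly shallower.
    patch = seafloor_patch()
    for i in range(20, 28):
        patch[i] -= random.uniform(8.0, 12.0)  # hull rising toward the surface
    return patch

def feature(patch):
    # One hand-picked feature: depth variance within the patch.
    return statistics.pvariance(patch)

# Labeled training set, playing the role of NOAA's wreck and seafloor imagery.
train = [(feature(seafloor_patch()), 0) for _ in range(200)] + \
        [(feature(wreck_patch()), 1) for _ in range(200)]

# "Training": place the decision threshold midway between the class means.
mean_floor = statistics.mean(f for f, y in train if y == 0)
mean_wreck = statistics.mean(f for f, y in train if y == 1)
threshold = (mean_floor + mean_wreck) / 2

def predict(patch):
    return 1 if feature(patch) > threshold else 0

# Evaluate on fresh, held-out patches.
test = [(seafloor_patch(), 0) for _ in range(100)] + \
       [(wreck_patch(), 1) for _ in range(100)]
accuracy = sum(predict(p) == y for p, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real project replaced the hand-picked variance feature with patterns a deep model learned on its own from the imagery, but the workflow (curate labeled positives and negatives, fit a decision rule, score held-out data) is the same.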