#machinelearning

Giga Manga: Google Uses AI to Create Manga Art from Your Doodles

Google continues to amaze us by inventing and innovating technology that uses machine learning and artificial intelligence to help us in different aspects of life. One of its new innovations helps people who have an abundant passion for drawing but lack the skill to match. The technology is called Giga Manga, an online experimental platform that turns your scribbles into manga artwork. It turns your creative doodle into the protagonist of a doujinshi using a set of AI and machine-learning models trained on over 140,000 high-resolution images.

The procedure is straightforward. To begin, draw a doodle of any shape: a circle, a squiggle, an outline, or anything else. Based on the 140,000 training images, Google will offer you the basic shape of a portrait. It is then up to you to fill in elements like lines and colors so that Giga Manga can finish the image for you.

Image credit: Android Police

#Google #ArtificialIntelligence #MachineLearning #Manga #GigaManga
Paralyzed Man Posted First Ever "Direct Thought" Tweet Using a Brain Chip

Silicon Valley startup Synchron Inc. has developed a unique brain chip. By analyzing brain activity, the chip can help people with movement impairments or disabilities perform tasks that would otherwise require movement. The company gave the sensor to Phil O'Keefe, a 60-year-old with mobility issues, to test its efficiency. According to O'Keefe, he agreed to join the trial to help others with his condition. "If it wasn't for the trial, I'd be going stir crazy big time," he said.

When he wants to open a document or click a link on his screen, all O'Keefe has to do is think about tapping his left ankle. That thought is picked up by the sensors in his brain and relayed to a computer through devices in his chest. The signals collected by the brain chip are converted into a mouse click, or another action, with the help of machine-learning software.

Synchron's brain chip, called the Stentrode, has the potential to handle more mobility-related tasks in the future. The technology is in its early stages, and its long-term safety still needs to be assessed. According to the company, it has done safety testing to mitigate risks.

Image credit: Synchron

#science #technology #machinelearning #brainsensor #brainchip #neurology
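The pipeline described above — a motor-intent thought is sensed, relayed to a computer, and translated into a mouse click by machine-learning software — can be illustrated with a toy decoder. Everything here is hypothetical (the weight vector, threshold, and feature layout are invented stand-ins, not Synchron's actual method); it only shows the general idea of thresholding a decoded intent score to trigger an action.

```python
import numpy as np

# Hypothetical stand-in for a trained intent decoder: the implant streams
# feature vectors for short time windows, and a fixed weight vector scores
# each window for "attempted ankle tap" intent.
WEIGHTS = np.array([0.8, -0.3, 0.5, 0.2])
THRESHOLD = 0.6

def decode_window(features):
    """Return 'click' when the decoded intent score crosses the threshold."""
    score = 1 / (1 + np.exp(-features @ WEIGHTS))  # squash to a 0..1 score
    return "click" if score > THRESHOLD else None

rest = np.array([0.0, 0.1, -0.1, 0.0])    # baseline (resting) activity
intent = np.array([2.0, -1.0, 1.5, 0.5])  # strong motor-intent pattern

print(decode_window(rest))    # no action at rest
print(decode_window(intent))  # emits a click
```

A real system would replace the fixed weights with a model trained on the user's own recorded signals, but the click-on-threshold structure is the same.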
Life Beneath the Ice: 12 New Species of Jellyfish Under the Antarctic Sea Ice

A collaboration between two researchers has produced a ground-breaking discovery! Postdoctoral researchers Dr. Gerlien Verhaegen and Dr. Emiliano Cimoli banded together on a study of the aquatic creatures featured in Dr. Cimoli's 2018 underwater footage of the Ross Sea, Antarctica.

The footage in question is full of different jellyfish species. Dr. Verhaegen initially came across Cimoli's video and was amazed by its quality. "You could clearly distinguish some key morphological features," the researcher adds. A total of twelve new species were reported in the resulting taxonomic paper. The study is the first to include a training image set for video annotation of jellyfish through machine learning. That's pretty cool!

Image credit: Dr. Emiliano Cimoli

#Jellyfish #TaxonomicStudy #MachineLearning #Science #GerlienVerhaegen #EmilianoCimoli #UnderwaterPhotography
AI Spots Shipwrecks From the Ocean's Surface and the Air with 92% Accuracy

Finding shipwrecks may sound like the stuff of movies, but it is an important part of research for the U.S. Navy, which is interested in wrecks that may shed light on human history, including trade, migration, and war. Unlike in the movies, finding shipwrecks usually doesn't involve a map with a bloody X marking the location of the bounty, but science has the next best thing: sonar and lidar imagery of the seafloor.

Leila Character of The University of Texas at Austin and colleagues, in collaboration with the United States Navy's Underwater Archaeology Branch, used machine learning to spot shipwrecks off the coasts of the mainland United States and Puerto Rico.

Character wrote in The Conversation:

"The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck."

Character's model has an accuracy of 92%, and she now hopes to extend it to spot shipwrecks around the world.

#shipwreck #sonar #lidar #seabed #oceanfloor #archaeology #underwaterarchaeology #USNavy #artificialintelligence #AI #machinelearning

Image: Shipwrecks off the coast of Washington at a depth of 25 m. Character L, et al. (2021) Remote Sens. 13(9): 1759.
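The workflow Character describes — labeled examples of wrecks and natural seafloor used to train a model that classifies new imagery — is standard supervised image classification. A minimal sketch of that idea, with synthetic tiles standing in for real sonar/lidar data (the tile generator, sizes, and simple logistic-regression classifier are illustrative assumptions, not the paper's actual model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for 16x16 seafloor tiles: plain "seafloor" tiles are
# smooth noise; "shipwreck" tiles add a bright elongated ridge, like a hull
# standing out in sonar or lidar relief.
def make_tile(has_wreck: bool) -> np.ndarray:
    tile = rng.normal(0.0, 0.2, size=(16, 16))
    if has_wreck:
        row = rng.integers(4, 12)
        tile[row, :] += 1.5  # bright linear feature
    return tile.ravel()

# Build a labeled dataset: 1 = shipwreck, 0 = natural seafloor.
X = np.array([make_tile(i % 2 == 0) for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real study worked with far larger NOAA imagery and more capable models, but the core recipe is the same: label examples of both classes, train, and score on held-out data.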