Neural networks have proven effective at solving difficult problems, but designing their architectures can be challenging, even for image classification alone. Evolutionary algorithms provide a technique for discovering such networks automatically. Despite significant computational requirements, we show that it is possible today to evolve models that rival large, hand-designed architectures.
Deep-Learning Machine Listens to Bach, Then Writes Its Own Music in the Same Style
The #machine-learning technique is straightforward. Hadjeres and Pachet begin by creating a data set to train their #neural_network: they take 352 chorales composed by #Bach and transpose them to other keys that lie within a predefined vocal range, giving a data set of 2,503 chorales. They use 80 percent of these to train their neural network to recognize Bach harmonies and the rest to validate it.
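The preprocessing described above, transposing each chorale to every key that stays within a vocal range, then holding out 20 percent for validation, can be sketched as follows. This is a minimal illustration, not the authors' code: chorales are represented as flat lists of MIDI pitch numbers, and the range bounds are assumed values.

```python
import random

def transpose(chorale, semitones):
    """Shift every pitch in a chorale (list of MIDI note numbers) by `semitones`."""
    return [pitch + semitones for pitch in chorale]

def augment(chorales, vocal_range=(36, 81)):
    """Keep each transposition whose pitches all stay within `vocal_range`."""
    low, high = vocal_range
    augmented = []
    for chorale in chorales:
        for shift in range(-11, 12):
            shifted = transpose(chorale, shift)
            if min(shifted) >= low and max(shifted) <= high:
                augmented.append(shifted)
    return augmented

def split(dataset, train_fraction=0.8, seed=0):
    """Shuffle, then split into training and validation sets."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]
```

Transposition is a cheap way to multiply the training data (352 chorales become 2,503) without changing the harmonic relationships the network is meant to learn.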
An overview of how #google_translate works (this article covers the production system).
The #recherche novelty is that by mixing all the languages into a single model, the results are of equivalent quality while using fewer sentences than in systems where languages are handled strictly in pairs.
Better still, a system trained on EN<->PT and ES<->PT translations can translate EN<->ES with good performance, much like the "style transfer" seen on paintings.
Hence the question: does a model trained this way recognize "fundamental concepts" of language?
We propose a simple, elegant solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages.
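The "simple, elegant solution" in the paper is to prepend an artificial token to each source sentence telling the shared model which language to produce; the zero-shot EN<->ES behavior described above falls out of that. A minimal sketch of the data-preparation side (the token format `<2xx>` is an assumption for illustration; no actual NMT model is trained here):

```python
def add_target_token(source_sentence, target_lang):
    """Prepend an artificial token telling the shared model which language to emit."""
    return "<2{}> {}".format(target_lang, source_sentence)

# Training pairs from several language directions go into ONE mixed dataset:
corpus = [
    (add_target_token("How are you?", "pt"), "Como vai você?"),   # EN -> PT
    (add_target_token("Como vai você?", "en"), "How are you?"),   # PT -> EN
    (add_target_token("¿Cómo estás?", "pt"), "Como vai você?"),   # ES -> PT
]

# At inference time a direction never seen during training ("zero-shot")
# is requested exactly the same way:
zero_shot_input = add_target_token("How are you?", "es")  # EN -> ES
```

Because the encoder and decoder are shared across all directions, sentences with the same meaning tend to land near each other in the model's internal representation, which is what makes the zero-shot pairs work.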
“Daddy’s Car”, the first pop song composed by an artificial intelligence
Google unleashes deep learning tech on language with Neural Machine Translation | TechCrunch
Translating from one language to another is hard, and creating a system that does it automatically is a major challenge, partly because there are just so many words, phrases, and rules to deal with. Fortunately, neural networks eat big, complicated data sets for breakfast. #Google has been working on a machine learning translation technique for years, and today is its official debut.
The Google Neural Machine Translation system, deployed today for Chinese-English queries, is a step up in complexity from existing methods.
AI can recognise your face even if you’re pixelated
“Researchers at the University of Texas at Austin and Cornell Tech say that they’ve trained a piece of software that can undermine the privacy benefits of standard content-masking techniques like blurring and pixelation by learning to read or see what’s meant to be hidden in images—anything from a blurred house number to a pixelated human face in the background of a photo.”
The researchers were able to defeat three privacy protection technologies, starting with YouTube’s proprietary blur tool. YouTube allows uploaders to select objects or figures that they want to blur, but the team used their attack to identify obfuscated faces in videos. In another example of their method, the researchers attacked pixelation (also called mosaicing). To generate different levels of pixelation, they used their own implementation of a standard mosaicing technique that the researchers say is found in Photoshop and other common programs. And finally, they attacked a tool called Privacy Preserving Photo Sharing (P3), which encrypts identifying data in JPEG photos so humans can’t see the overall image, while leaving other data components in the clear so computers can still do things with the files like compress them.
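The core idea of the attack is that the software never needs to reconstruct the hidden face: it is trained directly on obfuscated versions of candidate images, so recognition works on the mosaiced pixels themselves. A toy sketch of that principle (images as 2-D lists of grayscale values, with a nearest-neighbor matcher standing in for the neural network the researchers actually used):

```python
def pixelate(image, block=4):
    """Mosaicing: replace each block x block tile with its average value."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

def train(labeled_images, block=4):
    """'Train' by storing the obfuscated version of each known identity."""
    return [(label, pixelate(img, block)) for label, img in labeled_images]

def identify(model, obfuscated):
    """Return the label whose stored obfuscated image is closest (L1 distance)."""
    def dist(a, b):
        return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return min(model, key=lambda pair: dist(pair[1], obfuscated))[0]
```

The privacy lesson is the same at toy scale as at full scale: mosaicing discards detail, but it discards it deterministically, so an attacker who can obfuscate candidate images the same way can still match identities.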
A “similar-image search” for satellite photos: an open-source tool for discovering “patterns of interest” in unlabeled satellite imagery—a prototype for exploring the unmapped, and the unmappable.
(...) Terrapattern is ideal for discovering, locating and labeling typologies that aren’t customarily indicated on maps. These might include ephemeral or temporally-contingent features (such as vehicles or construction sites), or the sorts of banal infrastructure (like fracking wells or smokestacks) that only appear on specialist blueprints, if they appear at all.
(...) the Terrapattern prototype is intended to demonstrate a workflow by which users—such as journalists, citizen scientists, humanitarian agencies, social justice activists, archaeologists, urban planners, and other researchers—can easily search for visually consistent “patterns of interest”. We are particularly keen to help people identify, characterize and track indicators which have not been detected or measured previously, and which have sociological, humanitarian, scientific, or cultural significance.
Writing with the machine
Finally got this running: snappy in-editor “autocomplete” powered by a neural net trained on old sci-fi stories.
I’d been reading about #deep_learning for a couple of years, but it wasn’t until a long conversation earlier this year with an old friend (who is eye-poppingly excited about these techniques) that I felt motivated to dig in myself. And, I have to report: it really is a remarkable community at a remarkable moment. Tracking papers on Arxiv, projects on Github, and threads on Twitter, you get the sense of a group of people nearly tripping over themselves to do the next thing — to push the state of the art forward.
That’s all buoyed by a strong (recent?) culture of clear #explanation.
Watch This Open Source AI Learn to Dominate Super Mario World in Just 24 Hours - Singularity HUB
In a new YouTube video, Seth Bling explains the magic behind software he developed to learn how to play Nintendo’s Super Mario World.
“This program started out knowing absolutely nothing about Super Mario World or Super Nintendos,” Bling says. “In fact, it didn’t even know that pressing ’right’ on the controller would make the player go towards the end of the level.”
Using #Waifu2x to Upscale Japanese Prints
One tool that I came across yesterday is called Waifu2x. It’s a convolutional neural network (CNN) designed to optimally “upscale” images (taking a small image and generating a larger one). The creator built it to better upscale poorly-sized anime images and video.
(low resolution; resized with the Mac; resized with waifu2x)
note for @baroug: since this program first undergoes #apprentissage (training) on a certain type of #images, it works better with those images (in this case, images with flat color areas) than with textured photos.
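The pipeline shape behind this kind of upscaler can be sketched in two steps: a naive enlargement, followed by a filtering pass that refines it. In waifu2x the refinement is a trained CNN; here a fixed 3x3 convolution stands in for the learned layers, purely to illustrate the structure (grayscale images as 2-D lists):

```python
def upscale_2x(image):
    """Naive nearest-neighbour 2x enlargement (the step a CNN upscaler refines)."""
    out = []
    for row in image:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(doubled[:])
    return out

def convolve3x3(image, kernel):
    """Apply a 3x3 filter with edge clamping; waifu2x learns its filter weights,
    here they are fixed and hand-picked."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    yy = min(max(y + ky - 1, 0), h - 1)
                    xx = min(max(x + kx - 1, 0), w - 1)
                    acc += kernel[ky][kx] * image[yy][xx]
            out[y][x] = acc
    return out
```

This also makes @baroug's point concrete: a fixed filter treats all content the same, whereas waifu2x's learned filters encode statistics of the anime images it was trained on (sharp edges between flat color areas), which is why it generalizes better to prints with flat tints than to textured photographs.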