Neural networks have proven effective at solving difficult problems, but designing their architectures can be challenging, even for image classification alone. Evolutionary algorithms provide a technique for discovering such networks automatically. Despite significant computational requirements, we show that evolving models that rival large, hand-designed architectures is possible today.
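The abstract's core idea can be sketched in miniature (my illustration, not the paper's method): treat an architecture choice, here just the hidden-layer width of a tiny numpy MLP, as a genome, mutate it, train each candidate briefly, and keep whichever scores better. The XOR task and all the hyperparameters below are arbitrary assumptions for the sketch.

```python
# Toy evolutionary architecture search: evolve the hidden-layer width
# of a small sigmoid MLP on XOR, keeping the fitter of parent and child.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

def train_and_score(hidden, epochs=2000, lr=1.0):
    """Train a 2 -> hidden -> 1 sigmoid MLP; return final MSE (lower is fitter)."""
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backprop through the output
        d_h = (d_out @ W2.T) * h * (1 - h)       # ...and the hidden layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return float(((out - y) ** 2).mean())

# Evolution loop: mutate the architecture, keep the better of the two.
hidden, fitness = 1, train_and_score(1)
for _ in range(10):
    child = max(1, hidden + rng.integers(-1, 2))  # mutate width by -1/0/+1
    child_fit = train_and_score(child)
    if child_fit < fitness:
        hidden, fitness = child, child_fit

print(hidden, round(fitness, 3))
```

The paper evolves whole convolutional networks over hundreds of workers; this loop only shows the select-mutate-retrain shape of the idea.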
Writing with the machine
Finally got this running: snappy in-editor “autocomplete” powered by a neural net trained on old sci-fi stories.
I’d been reading about #deep_learning for a couple of years, but it wasn’t until a long conversation earlier this year with an old friend (who is eye-poppingly excited about these techniques) that I felt motivated to dig in myself. And, I have to report: it really is a remarkable community at a remarkable moment. Tracking papers on arXiv, projects on GitHub, and threads on Twitter, you get the sense of a group of people nearly tripping over themselves to do the next thing — to push the state of the art forward.
That’s all buoyed by a strong (recent?) culture of clear #explanation.
Watch This Open Source AI Learn to Dominate Super Mario World in Just 24 Hours - Singularity HUB
In a new YouTube video, Seth Bling explains the magic behind software he developed to learn how to play Nintendo’s Super Mario World.
“This program started out knowing absolutely nothing about Super Mario World or Super Nintendos,” Bling says. “In fact, it didn’t even know that pressing ‘right’ on the controller would make the player go towards the end of the level.”
Using #Waifu2x to Upscale Japanese Prints
One tool that I came across yesterday is called Waifu2x. It’s a convolutional neural network (CNN) designed to optimally “upscale” images (taking a small image and generating a larger one). The creator of this tool built it to better upscale poorly sized anime images and video.
(low-res; resized with the Mac; resized with waifu2x)
Note, @baroug: since this program starts by undergoing #training on a certain type of #images, it works better with those images (here, images with flat areas of color) than with textured photos.
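The upscaling idea above can be sketched in a few lines. This is an assumption-laden stand-in, not waifu2x’s actual network: where waifu2x applies convolution filters learned from anime images, the sketch enlarges a grayscale image by nearest-neighbor and then applies one fixed 3×3 kernel.

```python
# Minimal sketch of CNN-style upscaling (NOT waifu2x's real model):
# nearest-neighbor 2x enlargement followed by a single 3x3 convolution.
# A box-blur kernel stands in for the weights a CNN would learn.
import numpy as np

def upscale2x(img, kernel=None):
    """Double an H x W grayscale image, then convolve with a 3x3 kernel."""
    if kernel is None:
        kernel = np.full((3, 3), 1 / 9.0)          # fixed stand-in filter
    big = img.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbor 2x
    padded = np.pad(big, 1, mode="edge")           # keep output same size
    out = np.zeros_like(big, dtype=float)
    for i in range(big.shape[0]):
        for j in range(big.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return out

small = np.arange(16, dtype=float).reshape(4, 4)
print(upscale2x(small).shape)  # (8, 8)
```

The note above about training data falls out of this structure: a learned kernel is only as good as the images it was fit on, which is why waifu2x shines on flat-color art and struggles on textured photos.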