GLOBAL AI HACKATHON
 
■ Date: 2016.12.01 - 2016.12.04
■ Venue: COEX (Creative Korea 2016, Seoul)
■ Host: Ministry of Science, ICT and Future Planning, KOREA
■ Supervision: Art Center Nabi & Korea Foundation for the Advancement of Science & Creativity
■ Partner: IBM Watson
■ Programs: Completion of the hackathon prototypes at Creative Korea 2016, Hackathon Award Ceremony, Talk Concert, Exhibition
■ Participants:

Goldsmiths, University of London / Creative Computing
Mick GRIERSON, Rebecca FIEBRINK, Hadeel AYOUB, Jakub FIALA, Leon FEDDEN
 
New York University / Interactive Telecommunication Program
Gene KOGAN

School of Machines, Making & Make-Believe / The Neural Aesthetic
Andreas REFSGAARD
 
Seoul National University / Biointelligence lab
Byoung-Tak ZHANG, Eun-Sol KIM, Kyoung Woon ON, Sang-Woo LEE, Donghyun KWAK, Yu-Jung HEO, Wooyoung KANG, Ceyda ÇINAREL, Jaehyun JUN,  Kibeom KIM
 
Art Center Nabi / Nabi E.I.Lab
Youngkak CHO, Youngtak CHO, Junghwan KIM, Yumi YU 
 
Georgia Institute of Technology / Center of Music Technology
Gil WEINBERG, Mason BRETAN, Si CHEN 
 
Hong Kong University of Science and Technology / Department of Computer Science and Engineering
Tin Yau KWOK, Minsam KIM 
 
City University of Hong Kong / Department of Electronic Engineering
Ho Man CHAN, Qi SHE




 
Goldsmiths, University of London
 
Title of Project 
Bright Sign Glove
 
Keywords
Assistive Technology, Wearable Technology, Healthcare Innovation, Machine Learning, Gesture Recognition.  
 
Concept / Purpose of the project 
While accessible technology is becoming increasingly common, efforts to break the communication barrier between sign language users and non-users have yet to leave the world of academia and high-end corporate research facilities. Inspired by and building on Hadeel Ayoub’s research to date, we have strived to build a wearable, integrated and intuitive interface for translating sign language in real time.
 
Description of Technology used for this project
The Sign Language Data Glove is equipped with flex and rotation sensors and accelerometers that provide real-time gesture information to a classification system running on an embedded chip. We have used various IBM Bluemix services to perform text-to-speech conversion and language translation. The embedded system includes a speaker and a screen to display and speak detected words.
From a UX viewpoint, we aim to create a smooth experience, minimizing setup and maintenance interactions by providing a set of utility gestures. We have used IBM cloud services and local storage to transfer and cache data, enabling offline usage. We have built a simple web application to manage custom gestures uploaded from the embedded system to the IBM cloud.
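To make the pipeline concrete, here is a minimal illustrative sketch (not the team's actual code) of the recognition-to-speech step: a simple classifier maps a frame of flex/gyroscope readings to a word, which is then sent to the IBM Watson Text to Speech service. The training files, credentials and playback step are hypothetical placeholders, and the exact Watson SDK call may differ by SDK version.

# Minimal sketch of the glove pipeline: sensor frame -> gesture label -> speech.
# File names, credentials and the playback step are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from watson_developer_cloud import TextToSpeechV1   # IBM Watson Python SDK

# Train a simple classifier on recorded flex/gyroscope feature vectors.
X_train = np.load('gesture_features.npy')   # shape: (n_samples, n_sensors)
y_train = np.load('gesture_labels.npy')     # e.g. 'hello', 'thank you', ...
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

tts = TextToSpeechV1(username='WATSON_USERNAME', password='WATSON_PASSWORD')

def on_new_reading(sensor_vector):
    """Classify one frame of glove data and speak the detected word."""
    word = clf.predict([sensor_vector])[0]
    audio = tts.synthesize(word, accept='audio/wav', voice='en-US_AllisonVoice')
    with open('word.wav', 'wb') as f:
        f.write(audio)                       # played back on the glove's mini speaker
    return word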
 
Material
IBM Watson Bluemix Text to Speech API and Language Translation API, Raspberry Pi Zero, IBM Watson Python SDK, Flex sensors, Gyroscope, mini OLED screen, Mini Speaker, Smart textile gloves
 
Team Bio
Hadeel Ayoub, PhD researcher in Arts and Computational Technology; Jakub Fiala, MSci Creative Computing; Leon Fedden, Creative Computing undergraduate. Hadeel, Leon and Jakub are researchers from London, based at Goldsmiths, University of London and the Wellcome Trust, working in the fields of accessible technology, creative computing and machine intelligence. 


 
Gene Kogan & Andreas Refsgaard
 
Title of Project 
Doodle Tunes
 
Keywords
Machine learning / Music
 
Concept / Purpose of the project 
This project lets you turn doodles (drawings) of musical instruments into actual music. A camera looks at your drawing, detects instruments that you have drawn, and begins playing electronic music with those instruments.
 
Description of Technology used for this project
It’s a software application, built with openFrameworks, that uses computer vision (OpenCV) and convolutional neural networks (ofxCcv) to analyze a picture of a piece of paper where instruments have been hand drawn, including bass guitars, saxophones, keyboards, and drums.
The classifications are made using a convolutional neural network trained on ImageNet; the application then sends OSC messages to Ableton Live, launching clips that play the music for each detected instrument.
The software will be made available at https://github.com/ml4a/ml4a-ofx
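As an illustrative stand-in for the detection-to-music step (the actual application is written in openFrameworks/C++), the Python sketch below maps detected instrument labels to OSC messages that launch clips in Ableton Live. The OSC port, address pattern and clip mapping are assumptions, not the team's configuration.

# Illustrative Python stand-in: detected instrument labels -> OSC clip-launch messages.
# The OSC port, address pattern and clip mapping below are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 9000)

# Hypothetical mapping from classifier label to a (track, clip) slot in the Live set.
CLIP_FOR_INSTRUMENT = {'bass': (0, 0), 'saxophone': (1, 0),
                       'keyboard': (2, 0), 'drums': (3, 0)}

def play_detected(instruments):
    """instruments: labels returned by the image classifier for the current drawing."""
    for name in instruments:
        if name in CLIP_FOR_INSTRUMENT:
            track, clip = CLIP_FOR_INSTRUMENT[name]
            client.send_message('/live/play/clipslot', [track, clip])

play_detected(['bass', 'drums'])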
 
Material
openFrameworks (including mainly ofxOpenCv, ofxCv, ofxCcv, ofxAbletonLive), a stand and attached web camera, some paper, and pens.
 
Team Bio
Andreas Refsgaard and Gene Kogan are artists, programmers, and interaction designers, working with machine learning to create real-time tools for artistic and musical expression.



Seoul National University 
 
Title of Project 
AI Scheduler: Learn your life, Build your life
 
Keywords
Automatic scheduling / Daily life pattern / Deep learning / Inverse reinforcement learning / Wearable device 
 
Concept / Purpose of the project
In this project, we explore the possibility of an AI assistant that will take part in your life in the future. Being able to predict your daily life patterns is an obviously necessary function of such an assistant. In this respect, we focus on the theoretical issue of how to learn a person's daily life pattern. For this hackathon, we developed a system that can automatically recognize a user's current activity, learn the user's activity patterns, and then predict future activity sequences.
 
Description of Technology used for this project
From wearable camera and smartphone data, the system automatically recognizes the user's current activity, along with time and location information. For this part, the Watson Visual Recognition API and several machine learning algorithms were used. Based on this information, the system learns the patterns of the user's daily life. We devised the learning algorithm based on inverse reinforcement learning theory so that the user's life pattern can be learned properly; the system can then generate the user's future life pattern. As a scheduler, it is also desirable for the system to interact with the user and reflect the user's intentions, so we developed a web-based interface that interacts with the user in natural language. For this, IBM Watson's Conversation service was used alongside its text-to-speech and speech-to-text services.
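As a simplified illustration of life-pattern prediction, the sketch below uses a first-order Markov model over recognized activities rather than the inverse reinforcement learning formulation actually used in the project: it estimates activity transition counts from a recognized activity log and predicts the most likely next activity. All activity labels are toy examples.

# Simplified stand-in: a first-order Markov model over recognized activities
# (the project itself learns patterns with inverse reinforcement learning).
from collections import defaultdict, Counter

def fit_transitions(activity_log):
    """activity_log: time-ordered activity labels, e.g. ['sleep', 'breakfast', ...]."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(activity_log, activity_log[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_activity):
    """Return the most frequent next activity observed after the current one."""
    if current_activity not in counts:
        return None
    return counts[current_activity].most_common(1)[0][0]

log = ['sleep', 'breakfast', 'commute', 'work', 'lunch', 'work', 'commute', 'dinner']
model = fit_transitions(log)
print(predict_next(model, 'work'))   # prints 'lunch' for this toy log (tie broken by first occurrence)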
 
Material
TensorFlow, VGG Net, wearable camera, web-based interface (IBM Bluemix Conversation Service, Speech to Text Service, Text to Speech Service, Node-RED).
 
Team Bio
We are graduate students in the Biointelligence Laboratory at Seoul National University. We are interested in the study of artificial intelligence and machine learning on the basis of biological and bio-inspired information technologies, and in their application to real-world problems. 


Nabi E.I.Lab (Emotional Intelligence Laboratory)
 
Title of Project
A.I. Interactive Therapy
 
Keywords
Artificial Intelligence, Art color therapy (CRR test), New media art, Interaction, IBM Watson
 
Concept / Purpose of the project 
A.I. Interactive Therapy attempts to analyze human psychology and emotion through artificial intelligence. The system is interactive: the client undergoes psychological counseling through direct physical actions. It is based on a creative approach to the imagery of art therapy, and is designed to analyze emotional stability and the inner aspects of human psychology by making full use of the interactive characteristics and visual effects of new media art.
 
Description of Technology used for this project
This project is based on the CRR test, a color psychology analysis method in which the subject's mental state is assessed by having them select three of eight plane figures in order; artificial intelligence has been grafted onto this method. First, a UI environment was built as an application projected vertically onto the floor. A Kinect camera tracks the user's movements in real time, so that figure selections can be captured and responded to interactively. A voice feedback system handles the presentation of results and the overall operation of the system: it is driven by IBM Watson's Conversation API in a chatbot-style setup, and decisions on how the process proceeds are made via IBM Watson's STT/TTS. The calculation of the result based on the final selection is also made through artificial intelligence. The application is written in Python.
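As a minimal illustration of the selection step (not the production code), the sketch below maps a tracked hand position, projected onto the floor UI, to one of the eight figures and collects the three ordered choices required by the CRR test. The figure layout and selection radius are assumptions.

# Minimal sketch of the selection step: a tracked hand position projected onto the
# floor UI is mapped to one of the eight figures. Layout and radius are assumptions.
FIGURE_POSITIONS = {i: (200 + (i % 4) * 300, 200 + (i // 4) * 300) for i in range(8)}
SELECT_RADIUS = 120   # pixels (hypothetical)

def figure_under_hand(hand_x, hand_y):
    """Return the index of the figure the hand is over, or None."""
    for idx, (fx, fy) in FIGURE_POSITIONS.items():
        if (hand_x - fx) ** 2 + (hand_y - fy) ** 2 <= SELECT_RADIUS ** 2:
            return idx
    return None

selection = []   # the three ordered choices required by the CRR test

def on_hand_frame(hand_x, hand_y):
    """Collect up to three distinct figure choices from successive Kinect frames."""
    idx = figure_under_hand(hand_x, hand_y)
    if idx is not None and idx not in selection and len(selection) < 3:
        selection.append(idx)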
 
Material
Hardware: Projector, Kinect v2 camera, Mac PC, Speaker
Software: IBM Watson API, Python, PyKinect2
 
Team Bio
E.I. Lab at Art Center Nabi is a creative production laboratory that researches and tests the contact points between art and technology. E.I. Lab stands for Emotional Intelligence Laboratory and focuses on creating new content through an emotional approach and technical research. The members are new media artist Youngkak Cho, software developer Youngtak Cho, designer Junghwan Kim and interaction designer Yumi Yu. The lab is currently researching and developing fusion projects based on robotics and artificial intelligence technologies. 



Georgia Institute of Technology
 
Title of Project 
Continuous Robotic Finger Control for Piano Performance
 
Keywords
Robotics, deep learning, computer vision, machine learning, robotic musicianship
 
Concept / Purpose of the project 
This project uses deep neural networks to predict fine-grained finger movements from muscle movement data measured at the forearm. In particular, we look at the fine motor skill of playing the piano, which requires dexterity and continuous, rather than discrete, finger movements. While this project is most directly applicable to restoring musical ability to musicians with amputations through smarter prosthetics, the successful demonstration of fine motor control with our deep learning technique points to promising applications of this method and our novel sensor throughout the medical and prosthetics fields.
 
Description of Technology used for this project
The final deep learning technique used to successfully demonstrate continuous robotic finger control was a four-layer fully connected network followed by a cosine similarity and a softmax loss. Images were normalized before input, and batch normalization was used to smooth out the regression results. In post-processing, noisy regression results were further smoothed with a filtering step. To implement this network and run our prior experiments, we used TensorFlow, Torch, and pre-trained Inception networks.
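For illustration, here is a hedged sketch of a four-layer fully connected regression network with batch normalization, in the spirit of the description above (tf.keras, with a plain mean-squared-error loss standing in for the cosine-similarity/softmax loss; layer sizes are assumptions), together with a simple moving-average filter of the kind that could serve as the post-processing smoothing step.

# Sketch only: a four-layer fully connected regression network with batch
# normalization (layer sizes assumed; MSE loss used instead of the described
# cosine-similarity/softmax loss), plus a moving-average smoothing filter.
import numpy as np
import tensorflow as tf

def build_finger_model(n_muscle_features, n_fingers=5):
    """Predict continuous per-finger bend values from a forearm feature vector."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation='relu', input_shape=(n_muscle_features,)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(n_fingers)      # continuous bend value per finger
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

def smooth(predictions, window=5):
    """Moving-average filter over per-frame finger predictions (frames x fingers)."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode='same'),
                               0, predictions)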
 
Material
Deep learning libraries: TensorFlow, Inception, Torch
Data collection: glove bend sensor, MIDI output from keyboard, muscle sensor
 
Team Bio
Mason Bretan and Si Chen are PhD students working in robotic musicianship and computer vision, respectively. Gil Weinberg is a Professor in the School of Music and the founding director of the Center for Music Technology. They are based at the Georgia Institute of Technology, within the Center for Music Technology and the College of Computing, School of Interactive Computing.
 

City University of Hong Kong & Hong Kong University of Science and Technology
 
Title of Project 
Cognitive DJ
 
Keywords
IBM Watson, tone analysis, conversation, emotion vector, music recommendation
 
Concept / Purpose of the project 
The project aims at developing a novel Cognitive DJ that selects music for users based on their emotions. Current music recommendation systems require the user to search for specific music manually and/or rely on user history; such approaches cannot reflect or meet the user's emotional needs in real time. By interacting with the Cognitive DJ through text or speech, the user receives music recommendations based on their current emotional state: the Cognitive DJ analyzes the content and tone of each user's conversation to recommend the right song at the right moment.
 
Description of Technology used for this project
We have established a music database that includes songs from diverse categories. IBM Watson tone analysis is applied to the lyrics of each song to derive an emotion vector with five scores, indicating anger, disgust, fear, joy, and sadness. Using the Conversation service provided in IBM Watson, we have created our workspace, including intents, entities, and dialog. The conversation with the Cognitive DJ is carried out through a user-friendly interface. Based on the user's conversation with the Cognitive DJ, the user's emotion vector is derived and compared with the emotion vectors of all songs; the song whose emotion vector is closest to the user's is recommended. New recommendations are provided as the user continues the conversation with the Cognitive DJ.
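As a minimal sketch of the matching step (Euclidean distance is assumed here; the exact similarity measure is not specified above), the code below compares the user's five-dimensional emotion vector with each song's vector and returns the closest song. The song values are toy numbers, not real tone analysis output.

# Minimal sketch of the matching step: pick the song whose 5-dimensional emotion
# vector (anger, disgust, fear, joy, sadness) is closest to the user's.
import numpy as np

def recommend(user_emotion, song_emotions):
    """song_emotions: dict mapping song title -> 5-dim emotion vector from tone analysis."""
    user = np.asarray(user_emotion, dtype=float)
    best_song, best_dist = None, float('inf')
    for title, vec in song_emotions.items():
        dist = np.linalg.norm(user - np.asarray(vec, dtype=float))
        if dist < best_dist:
            best_song, best_dist = title, dist
    return best_song

songs = {'Song A': [0.1, 0.0, 0.1, 0.8, 0.1],   # toy values, not real analysis output
         'Song B': [0.6, 0.2, 0.3, 0.1, 0.7]}
print(recommend([0.1, 0.1, 0.0, 0.9, 0.0], songs))   # prints 'Song A'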
 
Material
Python, Java, IBM Watson.
 
Team Bio
Qi She is a Ph.D. student from the Department of Electronic Engineering (EE) of City University of Hong Kong (CityU), working on statistical machine learning methods for neural data analysis. Minsam Kim, Sikun Lin, and Wenxiao Zhang are graduate students from the Department of Computer Science and Engineering (CSE) of Hong Kong University of Science and Technology (HKUST), working on machine learning and time series data analysis.
The team’s mentors are Dr. James Kwok from HKUST and Dr. Rosa Chan from CityU. Dr. James Kwok is a Professor of CSE at HKUST, working on machine learning and neural networks. Dr. Rosa Chan is currently an Assistant Professor of EE at CityU; her research focuses on computational neuroscience and neural interfaces.
 