ELLi is a project that gives Artificial Intelligence a more human-like face. Our goal is to help her use Artificial Intelligence to answer one of the most popular questions people ask Google: what is love?
To achieve this, we take the photos and texts that represent what love means to you, and analyse them with machine learning tools in order to identify patterns and common characteristics.
At this stage, Google’s Cloud Vision API uses advanced Artificial Intelligence to identify what is shown in each photo. The elements detected in each photo are then automatically translated by Google’s Cloud Translation API.
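The labelling step could be sketched as follows. Real calls to the Cloud Vision API require credentials, so the `labels` list below is a hypothetical stand-in that mimics the shape of the API's label annotations (a description plus a confidence score); the filtering function is an assumption about how a pipeline might keep only confident labels.

```python
# Hypothetical sample shaped like the label annotations the Cloud
# Vision API returns for one photo: each label has a text description
# and a confidence score between 0 and 1.
labels = [
    {"description": "sunset", "score": 0.97},
    {"description": "sky", "score": 0.93},
    {"description": "cloud", "score": 0.41},
]

def confident_labels(annotations, threshold=0.8):
    """Keep only the labels the model is reasonably sure about."""
    return [a["description"] for a in annotations if a["score"] >= threshold]

print(confident_labels(labels))  # → ['sunset', 'sky']
```

The surviving descriptions would then be sent to the Cloud Translation API before clustering.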
Then the k-means data-mining algorithm finds photos with similar content and organises them into sets, placing them in a multidimensional cloud, and the t-Distributed Stochastic Neighbour Embedding (t-SNE) machine-learning algorithm reduces the dimensionality of the data for analysis. That is to say, we project the cloud’s 1000 dimensions into a human-friendly 3-dimensional form so that anyone can navigate through the results.
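The clustering and projection steps above can be sketched with scikit-learn. The random feature vectors, the choice of 5 clusters, and the t-SNE settings below are illustrative assumptions, not the project's actual parameters; only the 1000-dimensional input and the 3-dimensional output come from the description above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Illustrative stand-in for the photo representations: in the real
# pipeline, each photo becomes a 1000-dimensional vector built from
# its detected (and translated) labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 1000))

# Step 1: k-means groups photos with similar content into sets
# (5 clusters here is an arbitrary illustrative choice).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(features)

# Step 2: t-SNE projects the 1000-dimensional cloud down to a
# navigable 3-dimensional form, one point per photo.
coords_3d = TSNE(n_components=3, perplexity=10, init="random",
                 random_state=0).fit_transform(features)

print(coords_3d.shape)  # one 3-D coordinate per photo
```

Each photo ends up with both a cluster label (which set it belongs to) and a 3-D coordinate (where it sits in the navigable cloud).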
It may sound complicated, but in this way a concept known only to humans can be encoded in a form that a machine can recognise and process.
So, what is love? Is it a touch? A sunset? A colour? A taste? We will watch the final conclusions together and see them gradually take shape on this site. Don’t forget to submit your answer and visit again to see how the results evolve.