I presented my project to the class for the interim crit. A major piece of feedback I received was about my use of AR. AR is about anchoring virtual elements to real-world space viewed through a screen, so as you move your phone around a room, for example, the elements stay fixed in each corner. In my case, the AR content only appears when a Hiro marker is detected.
The use of Hiro markers is entirely down to the constraints on my knowledge and time; I could not get this working through computer vision. Whilst it would make more sense for my system to work without these markers, my brief states my project must be 'close to finished' and 'allow others to use it', so I felt that building a working interactive prototype was the priority, hence the marker method. Computer vision would have allowed my system to be pointed at a smart speaker or TV and know what data needs to be shown. That does sound much fancier than my current method, but I worked with the knowledge and skills I had at the time.
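For reference, this is roughly what the marker approach looks like using AR.js on top of A-Frame. It is a minimal sketch rather than my actual project code, and the script URLs and versions are assumptions that may need updating:

```html
<!-- Minimal sketch of the Hiro marker approach, assuming AR.js with A-Frame.
     Script URLs/versions are assumptions and may differ from my build. -->
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>

<a-scene embedded arjs="sourceType: webcam;">
  <!-- Anything inside a-marker is only rendered while the Hiro marker is in view -->
  <a-marker preset="hiro">
    <a-box position="0 0.5 0" material="color: #4CC3D9; opacity: 0.8"></a-box>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```

The key behaviour is that the box (in my project, the data visualisation) appears and disappears as the printed marker enters and leaves the camera view.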
The image above shows how computer vision works: AI detects objects in the frame and tries to identify them, for example trees, people and an aeroplane. In my project, these would be a smart speaker, smart plug, TV or lighting. Whilst this method would make the whole process of scanning a device and seeing its data more uniform, it is not as straightforward to implement in code. However, I have been experimenting with this method through some prototypes. Since it moves away from A-Frame, I am no longer restricted to the cube design for showing data leaving and entering the device, and I was able to be more creative with this aspect of the project.
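To illustrate the direction of these experiments, below is a minimal sketch of in-browser object detection using TensorFlow.js and the pre-trained coco-ssd model. This is an assumption about one possible approach, not my finished tool: coco-ssd only knows generic classes such as 'tv' or 'person', so recognising a specific smart speaker or plug would need a custom-trained model.

```html
<!-- Minimal sketch: detect objects from the phone camera with TensorFlow.js coco-ssd.
     This is an illustrative assumption, not my final implementation. -->
<video id="camera" autoplay muted playsinline></video>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"></script>
<script>
  const video = document.getElementById('camera');

  navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
    .then(async (stream) => {
      video.srcObject = stream;
      // Wait for the first frame so the model has real pixels to work with
      await new Promise((resolve) => video.addEventListener('loadeddata', resolve, { once: true }));

      const model = await cocoSsd.load();

      // Run detection a couple of times a second and react to recognised devices
      setInterval(async () => {
        const predictions = await model.detect(video);
        predictions.forEach((p) => {
          // p.class is a generic COCO label; 'tv' is the closest match to my devices
          if (p.class === 'tv' && p.score > 0.6) {
            console.log('TV detected - show its energy data overlay here', p.bbox);
          }
        });
      }, 500);
    });
</script>
```

Because the bounding box comes back as plain coordinates rather than an A-Frame entity, the data visualisation can be drawn however I like over the video, which is what freed me from the cube design.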
I developed a series of designs for a computer vision version of my tool. The image below shows some iterations of this.