- I thought about using Spark AR to create a filter that would mimic body movements. Since its interface is similar to Unity's, I knew I would be able to work with the 3D scan of my cellform.
- However, knowing that it would restrict me to posting my work on Instagram, I decided against it and looked for other options that allowed me to experiment with 3D filters.
- Unity MARS was another option, offering full-body tracking within an environment, which would have been exciting to create and develop with... However, it isn't compatible with WebGL builds, which could cause trouble when displaying my work.
- I was hesitant to use PoseNet with P5.js, as my past experimentation with it hadn't worked out as planned, but I decided that I wanted to work with a live AI model, which also allowed for more creative freedom and easy access for web users (see the sketch after this list).
- After conducting my initial research into artificial intelligence and how it measures up against human intelligence, I was inspired to expand my practice into the realm of human & machine collaboration.
- In short, I concluded that the types of human intelligence (identified by the American psychologist Howard Gardner) can be directly compared to different types of deep-learning neural network structures.
- I began exploring this phenomenon of supposed 'imitation' through visual experimentation with online GANs, which generated images from my words and actively worked to imitate my inputs.
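- Since I'm working in P5.js, a minimal version of that PoseNet setup looks something like the sketch below. This is just a sketch of the idea, assuming the ml5.js library (its v0.x PoseNet wrapper) is loaded alongside p5.js; the keypoint names (`pose.nose`, etc.) come from ml5's output:

```js
let video, poseNet, pose;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // ml5 wraps PoseNet so the model runs live in the browser
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0);
  if (pose) {
    // mark the detected nose keypoint on top of the webcam feed
    noStroke();
    fill(255, 0, 0);
    ellipse(pose.nose.x, pose.nose.y, 16, 16);
  }
}
```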
- After having read Dice World by Brian Clegg, I became fascinated by the idea of evolution by chance and existence in a random universe. I also wanted to see how my own thoughts and actions factor into randomisation: can I truly create something random without guidance? Or am I consciously choosing to place things in a certain way? I started with a physical textile piece, playing about with dropping materials and adding them wherever they landed (there's a small digital version of this sketched below).
- Lauren Gray :) (2022)
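- A tiny digital analogue of that dropping experiment, sketched in p5.js (the palette and scatter radius here are made up, not taken from the textile piece): each click chooses roughly where a swatch goes, but a Gaussian scatter decides exactly where it lands.

```js
const palette = ['#e07a5f', '#3d405b', '#81b29a', '#f2cc8f']; // placeholder colours

function setup() {
  createCanvas(600, 600);
  background(245);
  noStroke();
  rectMode(CENTER);
}

// I choose roughly where to drop a swatch; randomGaussian()
// decides exactly where it lands, like dropping fabric scraps
function mousePressed() {
  push();
  translate(mouseX + randomGaussian(0, 40), mouseY + randomGaussian(0, 40));
  rotate(random(TWO_PI));
  fill(random(palette)); // random() picks an element from an array
  rect(0, 0, random(20, 60), random(20, 60));
  pop();
}
```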
- Wanting to understand more about working with artificial intelligence to generate pieces of work, I started to think about creative possibilities outside the world of GANs... thinking about displaying the process of imitation, not just the output.
- After acquiring a bunch of redundant second-hand appliances that fell into my possession by chance, I knew I wanted to present an assisted readymade installation that simulated an observation of intelligence, choosing to present a live science experiment that viewers could interact with in a physical space.
- Humans harbour a natural ability to recognise patterns, even if what's presented is completely randomised. I wanted to use the visuals from this body of experimentation, as my own reaction to seeing my work change became quite analytical and mathematical, attempting to draw conclusions from the patterns I began to observe.
- This body of work almost became its own lifeform, thinking for itself in the way it chose to change and evolve. I knew that I wanted this piece to represent the synthetic intelligence I was referencing throughout my research and my practice.
- I ran into a few issues importing 3D models into P5.js, but with help from countless tutorials... I was able to watch my cellform slowly assemble itself into new forms (the first sketch after this list shows a stripped-back version of the loading setup).
- Initially I wanted my model to remain in one position until my webcam detected motion, but as I witnessed its evolution throughout the development of the sketch, I thought that it needed its own motion and thought pattern.
- Almost letting the code decide for itself, I played about with how the cellform moved and the subtle changes it made when a body was detected.
- Running the sketch on a test installation allowed me to identify bugs in my code and further develop the project to its full potential! Though my sketch worked, the cell was only moving in 2D, which created some interesting patterns but left part of the screen bare...
- After finding the problem and adding another cellform (created in Blender from the same 3D scan!), I was able to move them through 3D space!
- I replaced the first model (tracked to the nose point) with the second one I created in Blender (tracked to the wrist point) and separated their movement paths! (The second sketch below plays with this two-keypoint setup.)
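- For reference, here's a stripped-back sketch of the loading setup plus the 'own motion and thought pattern' idea in its simplest form. The file name `cellform.obj` is a placeholder for my own export, and the Perlin-noise drift is just one way of giving the form autonomous movement:

```js
let cell;

function preload() {
  // 'cellform.obj' stands in for the exported 3D scan;
  // the second argument normalises the model to a drawable size
  cell = loadModel('cellform.obj', true);
}

function setup() {
  createCanvas(600, 600, WEBGL);
}

function draw() {
  background(10);
  // Perlin noise gives the cellform a slow wandering drift of its own,
  // instead of it sitting still until a body appears
  const t = frameCount * 0.005;
  const x = map(noise(t, 0), 0, 1, -width / 3, width / 3);
  const y = map(noise(t, 100), 0, 1, -height / 3, height / 3);
  push();
  translate(x, y, 0);
  rotateY(t * 4);
  normalMaterial(); // quick shading so the form reads without setting up lights
  model(cell);
  pop();
}
```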
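- And a sketch of the two-cellform version, again assuming ml5's PoseNet and placeholder `.obj` file names: one form follows the nose keypoint, the other the right wrist, with the wrist form pushed back along z so the two move through 3D rather than sliding across a flat plane.

```js
let video, poseNet, pose;
let cellA, cellB;

function preload() {
  // placeholder names — both forms were exported from the same 3D scan in Blender
  cellA = loadModel('cellform_a.obj', true);
  cellB = loadModel('cellform_b.obj', true);
}

function setup() {
  createCanvas(640, 480, WEBGL);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => {
    pose = results.length > 0 ? results[0].pose : null;
  });
}

function draw() {
  background(10);
  if (pose) {
    // WEBGL's origin is the canvas centre, so the keypoint coords are shifted
    drawCell(cellA, pose.nose.x - width / 2, pose.nose.y - height / 2, 0);
    // the wrist form sits deeper along z, separating the two movement paths
    drawCell(cellB, pose.rightWrist.x - width / 2, pose.rightWrist.y - height / 2, -150);
  } else {
    // no body detected: the forms idle apart from each other
    drawCell(cellA, -80, 0, 0);
    drawCell(cellB, 80, 0, -150);
  }
}

function drawCell(m, x, y, z) {
  push();
  translate(x, y, z);
  rotateY(frameCount * 0.01);
  normalMaterial();
  model(m);
  pop();
}
```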