Epimorphism is a digital simulation of video feedback, a recursive process in which a video animation is fed back into itself. Video feedback is a pretty straightforward concept: if you hook a webcam up to your computer and point it at your screen, you get an infinite recursive loop. In the display, you see a picture of the display. In that display, you see another picture of the display...
This project is based on a similar principle, but takes things a few steps further. Normal video feedback is interesting enough, but it lacks precision and is fairly limited in what you can do with it. Epimorphism generalizes the whole process by simulating it: a virtual camera pointed at a virtual screen. We draw something on the screen, and then capture the result. We bend and twist the picture in a bunch of mathematically interesting ways, and then draw this picture back on the screen, blending it with the previous result. We then take a picture of that, bend and twist it again, draw and blend that result, repeating ad infinitum.
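To make the loop concrete, here is a minimal, purely illustrative sketch of one feedback iteration in TypeScript. The names `Image`, `transform`, `seed`, and `mix` are hypothetical stand-ins for what actually happens on the GPU in Epimorphism; this is a conceptual sketch, not the project's real code or API.

```typescript
// One pixel buffer per frame: RGBA values flattened into a typed array.
type Image = Float32Array;

// One feedback iteration: warp the previous frame, then blend in fresh seed imagery.
function step(prev: Image, transform: (img: Image) => Image,
              seed: Image, mix: number): Image {
  const warped = transform(prev);
  const next = new Float32Array(prev.length);
  for (let i = 0; i < next.length; i++) {
    // Blend the warped previous frame with the newly drawn seed image.
    next[i] = (1 - mix) * warped[i] + mix * seed[i];
  }
  return next;
}

// Run the loop "ad infinitum" (here: a fixed number of frames).
function run(initial: Image, transform: (img: Image) => Image,
             seed: Image, frames: number): Image {
  let frame = initial;
  for (let t = 0; t < frames; t++) {
    frame = step(frame, transform, seed, 0.1);
  }
  return frame;
}
```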
Epimorphism is a steadily evolving video feedback system. The various ways in which we can bend and twist and process the image, and the various images we seed into the system, are constantly morphing into each other. There are so many possible variations of how this happens that it's practically impossible for the system to ever repeat itself; Epimorphism exists in a state of perpetual, endless variation.
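As a rough illustration of what "morphing into each other" can mean for the transforms themselves, one way to do it is to crossfade between two warp functions as a blend weight animates from 0 to 1. The `swirl` and `pinch` warps below are invented for the example and are not drawn from Epimorphism's actual transform library.

```typescript
type Vec2 = [number, number];

// Two example "bend and twist" warps applied to pixel coordinates.
const swirl = ([x, y]: Vec2): Vec2 => {
  const r = Math.hypot(x, y);
  const a = Math.atan2(y, x) + 0.5 * r;   // rotate more the further out we are
  return [r * Math.cos(a), r * Math.sin(a)];
};

const pinch = ([x, y]: Vec2): Vec2 => [x * Math.abs(x), y * Math.abs(y)];

// As `t` animates from 0 to 1, the warp gradually changes from swirl into pinch.
const morph = (t: number) => (p: Vec2): Vec2 => {
  const [ax, ay] = swirl(p);
  const [bx, by] = pinch(p);
  return [(1 - t) * ax + t * bx, (1 - t) * ay + t * by];
};
```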
The current iteration of the project is written largely in PureScript, a dialect of Haskell which transpiles to JavaScript. It is an HTML5 application which runs on virtually all platforms, including tablets and many cell phones. The project is open source and can be found here. The PureScript component of Epimorphism acts as the front end for a parallel, high-performance computing application which runs directly on the client's graphics hardware via pixel shaders and WebGL. It is a highly parallel application, wherein the color value of each pixel is computed in its own thread directly on the graphics hardware, not on the CPU. The application is a parameterized system, and the default mode of operation is a random walk through the parameter space; numerical parameters, as well as the component functions themselves, are animated and interpolated by the front end of the software. (Press ~ while the application is running if you're curious :) If you're interested in contributing or collaborating, please get in touch!
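To give a feel for the random-walk idea, here is a hedged sketch (again illustrative, not Epimorphism's code) of how a numeric parameter might be animated: each value eases toward a randomly chosen target and picks a new target once it arrives, and the interpolated values would then be passed to the pixel shaders each frame.

```typescript
// Hypothetical parameter record: current value, current goal, easing speed.
interface Param { value: number; target: number; speed: number; }

// Pick a new random goal somewhere in [-1, 1].
function retarget(p: Param): void {
  p.target = Math.random() * 2 - 1;
}

// Advance one frame: ease toward the target, then wander on when close enough.
function animate(p: Param, dt: number): number {
  p.value += (p.target - p.value) * Math.min(1, p.speed * dt);
  if (Math.abs(p.target - p.value) < 1e-3) retarget(p);
  return p.value;   // this value would be uploaded as a shader uniform
}
```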
The software has been in development in some form for ~16 years. It originally started off as a geodesic dome covered in LEDs at Burning Man (in 2001!) and somehow organically morphed into this. It's been in a somewhat recognizable state for about 11 years. Significant near-term plans for the software include audio responsiveness, a more detailed and user-accessible creation UI, and machine learning algorithms to more adeptly navigate the phase space of the software, possibly even tailoring the results to users' individual tastes. Generalizing the system to three dimensions, using VR, is also in conceptual development.