This was a test to see if I could quickly create something by collaborating with AI (Disco Diffusion): first generating some guided concepts via text and image prompts, then turning those into something in 3D fairly quickly. I used an input image of a skull I had rendered from a scanned 3D model, set to a low influence in DD to give the AI room to dream. The result resembled a calcified angler fish, full of bone and wisp. I generated a depth map from that image and enhanced it with overlays; this was used to displace a plane in 3ds Max, which I then warped into shape using an FFD deformer and some further displacement. From there it went over to Toolbag for materials and rendering, and finally Photoshop/After Effects for post work. This was really rushed; I plan on refining the process and think I can squeeze more from it.
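
For anyone curious about the depth-displacement step, here is a minimal sketch of the idea outside of 3ds Max: treating the depth map as a heightfield and pushing each vertex of a flat plane along Z by its depth value. This is an illustrative NumPy toy, not the actual Max modifier stack; the `displace_plane` helper, the synthetic 4x4 depth array, and the 0-1 value range are all assumptions for the example.

```python
import numpy as np

def displace_plane(depth, scale=1.0):
    """Displace a flat plane's vertices along Z using a depth map.

    depth: 2D array of depth values in [0, 1], one sample per vertex.
    scale: displacement strength (analogous to a modifier's strength value).
    Returns an (H*W, 3) array of displaced vertex positions.
    """
    h, w = depth.shape
    # Regular grid of XY vertex positions spanning [0, 1] x [0, 1].
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    # Push each vertex out along Z by its (scaled) depth value.
    zs = depth * scale
    return np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

# A tiny synthetic "depth map" standing in for the AI-generated one.
depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)
verts = displace_plane(depth, scale=0.5)
```

In the real pipeline the enhanced depth map drives a Displace modifier on a dense plane, with the FFD deformer then bending the whole heightfield into its final silhouette.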