Premiered at TED2011, Dancing Atoms is a digital pixel map of Italy's étoile ballet dancer, Roberto Bolle.

This project examines the limits of human beauty and motion through the scanning, motion capture and pixel conversion of one of the world’s most talented ballet dancers. In collaboration with Queen Mary University of London, Rapido3D and CMYK+WHITE, the SENSEable team re-imagined Bolle in “Dancing Atoms,” aiming to replicate and study the body in new ways. As scanning technology and software tools allow us to make digital copies of ourselves, a convergence between bits and atoms emerges.





DOWNLOADS & PRESS

  • Download Body Scan Photos
  • Download Motion Capture Photos
  • Download Digital Avatar Photos
  • Download Videos
  • Download Press Release
  • Downloaded material may be used provided it is duly credited and SENSEable City Lab receives a PDF copy of the publication before and after print.

    TEAM

    Senseable City Lab

  • Carlo Ratti, Director
  • Adam Pruden, Team Leader

    Starring

  • Roberto Bolle, étoile ballet dancer
    http://www.robertobolle.com/

    Motion Capture by

  • The Interaction, Media and Communication Group, Queen Mary University of London
  • Pat Healey, Co-Director
  • Stuart Battersby
  • Arash Eshghi
  • Nicola Plant

    3D Scanning by

  • Rapido3D
  • Kev Stenning

    Animation by

  • CMYK+WHITE
  • EunSun Lee, Creative Director
  • Sanders Hernandez, 3D Generalist

    In a single 360-degree sweep, over 200,000 points on Roberto’s face were captured by Rapido3D’s Cyberware colour PS head/face scanning equipment to create an extremely detailed three-dimensional model. Through a technique called triangulation, a laser strikes the surface while a sensor observes the illuminated spot from a second, offset viewpoint; the known geometry between laser, sensor and surface point yields the depth of each point. Next, over one million polygons were generated to form Roberto’s entire body with a custom 360-degree, three-dimensional full-body scanner that mapped his x, y, and z coordinates for shape, and RGB values for color.
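    The triangulation geometry described above can be sketched in a few lines. This is a minimal 2D illustration, not Rapido3D's actual pipeline: the baseline length and angles below are assumed for the example.

```python
import math

def triangulate_range(baseline, laser_angle, camera_angle):
    """Distance from the camera to the laser spot on the surface.

    The laser emitter and camera sit `baseline` apart. The laser beam
    leaves the emitter at `laser_angle` and the camera sees the spot at
    `camera_angle` (both measured from the baseline, in radians). The
    emitter, camera, and spot form a triangle, so the law of sines
    recovers the range; sweeping the laser yields a point per angle.
    """
    # The angle at the surface point closes the triangle.
    spot_angle = math.pi - laser_angle - camera_angle
    return baseline * math.sin(laser_angle) / math.sin(spot_angle)

# Illustrative setup: emitter fires perpendicular to a 1 m baseline,
# camera sees the spot at 45 degrees -> range is sqrt(2) m.
r = triangulate_range(1.0, math.pi / 2, math.pi / 4)
```

    A real scanner repeats this computation for each of the hundreds of thousands of spots swept across the face, attaching an RGB sample to every recovered (x, y, z) point.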

    Rapido3D,
    www.rapido3d.co.uk

    Twelve motion capture cameras were installed for the dance sequence in a studio theatre at QMUL to create a large 3D capture space. Reflective optical markers were positioned at key points on Roberto's body: head, shoulders, torso, arms, hips, legs and feet. The tracking system maps these markers, 120 times a second, onto a skeleton calibrated to the dimensions of Roberto's body, producing high-resolution, real-time tracking of points in 3D space. This turns the clouds of moving marker points into real-time 3D maps of the body positions, movements and joint rotations that make up the dance sequence.
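    A sketch of one step in that pipeline: recovering a joint angle from three tracked markers in a single frame. The marker coordinates below are invented for illustration; the real system does this for every joint, 120 times a second.

```python
import math

def joint_angle(a, b, c):
    """Interior angle (degrees) at joint b, formed by markers a-b-c.

    a, b, c are (x, y, z) marker positions from one capture frame.
    The angle between the vectors b->a and b->c is what a skeleton
    model records as the joint's rotation at that instant.
    """
    v1 = tuple(p - q for p, q in zip(a, b))
    v2 = tuple(p - q for p, q in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical hip, knee, and ankle markers from one 1/120 s frame:
hip, knee, ankle = (0.0, 1.0, 0.0), (0.05, 0.5, 0.1), (0.0, 0.0, 0.0)
knee_flexion = joint_angle(hip, knee, ankle)
```

    Stringing these per-frame angles together over time is what turns raw marker clouds into the joint-rotation curves of the dance.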

    Queen Mary University of London,
    http://www.qmul.ac.uk

    Interaction Media and Communication, http://www.dcs.qmul.ac.uk/research/imc/

    Roberto’s 3D scan and motion capture footage were used by designers and researchers to interpolate the data and create visualization models. With this data imported and combined in Maya, Roberto’s form and motion were converted to a fluid map of responsive and adjustable pixel spheres, which brought his digital avatar to life. This avatar is displayed and controlled at different resolutions, ranging from a human constellation of twenty dots to a full-resolution body, created to portray the range of a liberated display system. Roberto’s pixel form expands through space, shifts forms and responds to environmental forces supplied by digital inputs.
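    The variable-resolution display can be approximated by subsampling the scanned point cloud. This is a simplified sketch, not the project's Maya setup: it just picks evenly spaced points so the same body can be shown as a twenty-dot constellation or at full resolution.

```python
def downsample(points, target):
    """Return `target` evenly spaced points from a full-resolution cloud.

    `points` is a sequence of (x, y, z) tuples from the body scan.
    When `target` meets or exceeds the cloud size, the full cloud is
    returned unchanged; otherwise every len(points)/target-th point is
    kept, giving a coarser "pixel sphere" constellation of the body.
    """
    if target >= len(points):
        return list(points)
    step = len(points) / target
    return [points[int(i * step)] for i in range(target)]

# A stand-in cloud of 1,000 points reduced to the twenty-dot avatar:
cloud = [(float(i), 0.0, 0.0) for i in range(1000)]
constellation = downsample(cloud, 20)
```

    In practice the avatar's per-point spheres would also be animated by the motion-capture data and perturbed by the digital "environmental forces" the text describes.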