Shape


Designing the future of music curation.






Speculative Design Module – Part 1 / 2
Royal College of Art / Imperial College London
(February 2020)
Features:

Museum of Design Atlanta (MODA) – opened April 2021
The Future Happened: Designing the Future of Music
[ view digital exhibition ]

San Francisco Design Week – Summer 2020
[ SF Design Week - Designing the Future of Music ]
[ LAD Design - Designing the Future of Music ]



Collaboration between RCA/Imperial x Lawrence Azerrad of LAD Design
AEG - UK x BST Hyde Park 2020 (cancelled) x Various Music Industry Guests

Team:
Maraid McEwan - LinkedIn
Seetharaman Subramanian - Website | LinkedIn
Nikolas Grafakos - LinkedIn











Shape enables people to create an expression of their musical persona, combining their music listening history and their physiological reactions to curate how they listen in the future.



In only a three-week design sprint, our cohort of students at the Royal College of Art and Imperial College London worked in small teams to look into the future of music—music listening, musical performances, music experiences, and music technology. Organized by Lawrence Azerrad and in partnership with AEG - UK / BST Hyde Park, our goal was to research, ideate, test new technologies, and present a new vision of the future of music.





Pitch video for Shape





Design brief.


Design and develop an experiential solution for the future of music. Your proposition should be part of a broader service, product or experience - but you will focus primarily on the design and build of the human-music interaction at the centre of your proposition.

This project took place pre-COVID-19, but it still looked into the future as an opportunity to reimagine the possibilities of social experiences, live performance, and sensory expression.




Highlights of the collaboration and end products from the charrette.
Video by Lawrence Azerrad and AEG





The big question.


What kind of
music do you
listen to?






As we dove into our explorations, we found ourselves coming back to one simple question that has plagued people for years: what kind of music do you listen to? Broken down, your music taste is defined by two things: what you already know and listen to, and how you feel in the moment while you're listening. This prompted our main driving question:

How might we enable people to connect through expression of their musical persona and instinct? By helping them to create a musical identity through instinct & history.










Measuring instinctual reactions.


Cognitive stimulus: measuring a user's immediate reaction to physical stimuli and translating that data into measurable metrics that can then be manifested. Ex. heart rate, brain activity, skin temperature, blood pressure, goosebumps, micro-expressions, memories and associations.

Boiled down, we quickly found that a reliable and testable way of measuring someone's reaction to music is through heart rate and galvanic skin response (measured through the fingertips).

Building our own set of sensors from simple Arduino components, we were able to pull reliable readings (as reliable as design students can manage in only a few days' work) from the classmates who took part in the experiment you'll see below. We combined these readings with part two of our research: measuring your music listening history.
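
For the curious, here is a minimal sketch of how readings like these can be pulled from an Arduino into Python over a serial connection. The port name and the comma-separated "heart_rate,gsr" line format are illustrative assumptions, not our exact sensor setup.

```python
# Minimal sketch: reading heart rate + GSR values streamed from an Arduino
# over serial. Assumes the Arduino prints lines like "72,510" (heart_rate,gsr).
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"   # illustrative port name; varies by machine
BAUD = 9600

def stream_readings(duration_s=30):
    """Collect (seconds_elapsed, heart_rate, gsr) tuples for duration_s seconds."""
    readings = []
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        start = time.time()
        while time.time() - start < duration_s:
            line = ser.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue
            try:
                heart_rate, gsr = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed lines
            readings.append((time.time() - start, heart_rate, gsr))
    return readings

if __name__ == "__main__":
    data = stream_readings(duration_s=30)
    print(f"Collected {len(data)} samples")
```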




Prototyping and user testing process




Understanding music listening history:


Historical Interactions: Referencing the user’s musical history through existing platforms to look at multiple layers of metrics and data. Ex. history, genre, artist, listening time, skip rate, bpm, frequencies, variety.

Our goal was to access Spotify's API to see if we could identify patterns between the songs people listen to and the underlying traits of those songs—essentially attempting to understand how Spotify's algorithm works. It's stupidly fascinating but, without getting too deep, Spotify categorizes songs based on certain metrics (mentioned in the previous paragraph, and all publicly available) and serves you new songs every week that match those metrics.
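
As a rough illustration, those publicly available metrics can be pulled in a few lines with the spotipy client; the credentials and track ID below are placeholders rather than anything from our actual setup.

```python
# Sketch: pulling Spotify's publicly documented audio features for a track
# using the spotipy client. Credentials and track ID are placeholders.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
))

track_id = "spotify:track:4uLU6hMCjMI75M1A2tKUQC"  # placeholder track
features = sp.audio_features([track_id])[0]

# A few of the per-track metrics Spotify exposes:
print(features["tempo"])         # bpm
print(features["energy"])        # 0.0 - 1.0
print(features["valence"])       # musical "positiveness", 0.0 - 1.0
print(features["danceability"])  # 0.0 - 1.0
```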





A simplified diagram showing our plan to replicate the Spotify algorithm in real-time with users.



We wanted to know if we could manipulate that process, but instead of using the songs you've previously listened to, we wanted to use emotion and instinctual response as the catalyst—resulting in the framework pictured above.

Our goal was to play the user a specific song (based on the songs they listen to or enjoy), and then—based on their physical response to the first song—play them another song that would either raise or lower their mood. Then replicate it a second time immediately afterwards.

Crazy thing is, we did.





Faking the algorithm.


Once Seetha and I got access to the Spotify API, we set to work on our experiment: recruiting our classmates, playing a song they like, taking a reading, then essentially faking the Spotify algorithm by playing new songs they would like—songs with similar back-end metrics—and seeing if the readings matched. Most often, they did.
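
In spirit, the loop looked something like the sketch below: measure the listener's skin response to a seed song, then ask Spotify's recommendations endpoint for a track whose target features are nudged up or down. The feature names are Spotify's; the shift size and the helper names are illustrative, not our exact tuning.

```python
# Sketch of the "fake the algorithm" loop: nudge the next track's target
# features up or down depending on the listener's skin response.

def gsr_delta(baseline, readings):
    """Mean GSR during the song minus the resting baseline."""
    return sum(readings) / len(readings) - baseline

def next_track(sp, seed_track_id, raise_mood=True):
    """Pick a similar track via Spotify recommendations, shifting
    energy/valence in the direction we want to push the listener."""
    seed = sp.audio_features([seed_track_id])[0]
    shift = 0.15 if raise_mood else -0.15
    recs = sp.recommendations(
        seed_tracks=[seed_track_id],
        limit=1,
        target_energy=min(1.0, max(0.0, seed["energy"] + shift)),
        target_valence=min(1.0, max(0.0, seed["valence"] + shift)),
        target_tempo=seed["tempo"],
    )
    return recs["tracks"][0]

# Usage: play the seed song, log GSR, then choose the follow-up.
# delta = gsr_delta(resting_baseline, gsr_during_song)
# follow_up = next_track(sp, seed_id, raise_mood=(delta < 0))
```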





Behind the scenes of our experimentation, attempting to fake the Spotify algorithm based on our users' physical responses to songs, utilizing back-end access to the Spotify API and custom Arduino sensors.





A quick time-lapse video showing our process of setting up the experiment and testing our fellow classmates.





Finding results.


Measuring each person's GSR (galvanic skin response), captured through a custom Python script developed by Seetha, we were able to see the relationship between their initial song choice and the two additional songs we selected based on their initial reading.

We were able to both positively and negatively influence their arousal and measure their response, which was really amazing.*

*Obviously, these readings were not 100% reliable and we are not trained scientists. But we saw enough of a correlation to be optimistic about our proposal. There are already many wearables, like the Apple Watch or Fitbit, that take daily, reliable readings of these same measurements and could provide much more accurate results.
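
For reference, the comparison itself is simple once the readings are logged. Here is a sketch of the kind of per-song summary we looked at; the CSV layout and column names are assumptions for illustration.

```python
# Sketch: comparing mean GSR across the resting baseline and the song segments.
# Assumes a log with columns: t_seconds, gsr, segment
# where segment is one of "baseline", "song_1", "song_2", "song_3".
import pandas as pd

def summarize(log_path):
    df = pd.read_csv(log_path)
    means = df.groupby("segment")["gsr"].mean()
    baseline = means["baseline"]
    # Positive values = arousal raised relative to rest, negative = lowered.
    return (means - baseline).drop("baseline")

if __name__ == "__main__":
    print(summarize("user_01_session.csv"))  # placeholder filename
```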





An example of results from one of our users in the testing process; the x-axis measures time in seconds, and the y-axis measures GSR response.





Final concept.


Shape is the manifestation of our concept, joining your music listening history with your instinctual reactions in real time. We believe that combining these two areas of expression creates a more intimate and accurate representation of your identity, which can be visualized, utilized as a tool, and communicated to others.

In this future space, we imagine a world where wearables are monitoring our daily life, and can be easily integrated into the Shape platform—essentially introducing a new metric to drive your daily listening.



Final concept poster
View the high-resolution version (PDF)
[ here. ]






User agency & control.


Obviously, this could seem a bit overbearing, and a bit “Big Brother” in nature, which is why we were sensitive to the levels of agency and control that you need to have when using Shape. As you likely saw in the video, there are three levels of interaction built into Shape: automated, suggested, and self-controlled.







Automated:

If you give Shape full control, it will run seamlessly in the background with your music listening platform of choice, monitoring your real-time reactions and adjusting your music accordingly.


Suggested:

If you would like a bit more control, allow Shape to monitor how you're feeling and it will provide suggestions for music that fits your response. If you're feeling down, Shape can offer suggestions to lift you up, or suggestions to let you stay in whatever funk you're in, helping you process whatever it is you're going through.


Self-controlled:

Finally, when it comes down to it, your music is exactly that—your music. Turn off Shape and listen to the things you want to listen to, no questions asked.
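
Conceptually, the three levels reduce to one setting that a client app respects when Shape generates a suggestion. Here is a minimal sketch, with entirely illustrative names rather than a real Shape API:

```python
# Sketch: the three levels of agency as a single client-side setting.
# ShapeMode, player.queue, and player.notify are illustrative names.
from enum import Enum

class ShapeMode(Enum):
    AUTOMATED = "automated"        # Shape changes the queue on its own
    SUGGESTED = "suggested"        # Shape only surfaces suggestions
    SELF_CONTROLLED = "off"        # Shape stays out of the way

def on_suggestion(mode, player, suggestion):
    """Decide what to do with a Shape-generated suggestion for the next track."""
    if mode is ShapeMode.AUTOMATED:
        player.queue(suggestion)                          # act silently in the background
    elif mode is ShapeMode.SUGGESTED:
        player.notify(f"Feeling it? Try: {suggestion}")   # surface, don't act
    # SELF_CONTROLLED: do nothing; the listener stays in full control
```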





Visualizing & sharing.


Lastly, to tie everything together, Shape is not only a system to help you more intimately connect with your music, but also to help you connect with other listeners, compare individual shapes, and help you discover new music.

We’ve broken down the visualization to a few key parts, all based off of your interactions with your music listening platform of choice.





An example of a user’s personalized shape




Color:

Color simply represents individual music genres.


Form:

The changing form represents the amount of time you spend listening to a genre. The bigger it gets, the more time you've spent listening to it.


Texture:

Indicates the variety of artists in a genre. A more densely packed texture represents a wider variety of artists you have listened to in that specific genre.


Orientation:

The placement of the forms on the map—inspired by Silvan Tomkins' work on emotional categorization—indicates the emotion you have associated with each genre, with the understanding that it will grow and change as time passes.





Defining your shape.


Above are the building blocks that define how your shape takes form, each informed by the inflow of data from your personal or monitoring device and then brought to life. Our hope is that Shape becomes a new abstract representation of your musical identity, one that changes with you as your music taste evolves. That makes it unique and entirely yours, and we see Shape becoming more than just a pretty picture—a tool for curation, an excuse for immersion, and an aid for expression.
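
To make the mapping concrete, here is a minimal sketch of how per-genre listening stats could drive the four visual attributes; the data structure, palette, and scaling factors are illustrative, not the actual Shape renderer.

```python
# Sketch: mapping per-genre listening stats to the four visual attributes.
# Field names, palette, and scaling are illustrative.
from dataclasses import dataclass

@dataclass
class GenreStats:
    genre: str             # drives colour
    hours_listened: float  # drives the size of the form
    artist_count: int      # drives texture density
    emotion_xy: tuple      # drives orientation/placement on the emotion map

GENRE_COLOURS = {"ambient": "#7fb2d9", "hip hop": "#e0563a"}  # example palette

def to_shape_params(stats: GenreStats) -> dict:
    return {
        "colour": GENRE_COLOURS.get(stats.genre, "#cccccc"),
        "size": 1.0 + stats.hours_listened * 0.1,              # more listening, bigger form
        "texture_density": min(1.0, stats.artist_count / 50),  # more artists, denser texture
        "position": stats.emotion_xy,                          # placement from the emotional mapping
    }
```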





three core
principles of shape:






1. Curation.


Shape could seamlessly integrate with current music listening platforms and a connected smart device, allowing for a richer and more engaging music listening experience. You could utilize Shape as a tool to dive in—explore the things you listen to, and find new music you may have never heard before.







2. Immersion.


In working with AEG and the BST Music Festival, we imagined Shape being used to create immersive experiences that engage diverse audiences. That could be a personalized, one-on-one experience with your music through an interactive booth—giving the festival-goer a gift to take away and share. Or it could be a fully immersive experience, utilizing the biofeedback from all the users who take part to create a changing, evolving visualization of the current mood of the festival.







3. Expression.


Lastly, Shape will enable you to communicate and express your musical identity to your friends and family. Sharing and accessing your friends' individual shapes will allow you to see what they listen to, how they're listening, and how it makes them feel—but only as much as they want you to know.








Narrative & video.


Our final deliverable for this project was a one-minute pitch video, followed by a five-minute pitch presentation, so we wanted to hit the main points as simply and straightforwardly as possible.

We worked as a team to refine our message, then split up to get the things we needed. Seetha and Maraid refined the script, wrote the narrative, recorded the voice-over, and finished the presentations. Nik created all the animations, learning After Effects from scratch, and helped with my work filming interviews, editing footage, and producing the final video you can watch at the top of the page.




Conclusion.


Designing for the future of music was an absolutely fascinating experience. Looking at the nature of the music industry, and then attempting to create and test a new application of technology in the space, was definitely a challenge. But I feel like we found something that could be developed further in the future, and also something that could be implemented today. I'm excited to see where this could lead.

And lastly, a BIG THANK YOU to my amazing team; we put out a crazy amount of work in only three weeks.



From left to right: Maraid, Me, Nikolas, and Seetha.




...