Snapstream: Snapshot-based Interaction in Live Streaming for Visual Art
Saelyne Yang
KAIST
Changyoon Lee
KAIST
Hijung Valentina Shin
Adobe Research
Juho Kim
KAIST
Abstract
Live streaming visual art such as drawing or using design software is gaining popularity. An important aspect of live streams is the direct and real-time communication between streamers and viewers. However, currently available text-based interaction limits the expressiveness of viewers as well as streamers, especially when they refer to specific moments or objects in the stream. To investigate the feasibility of using snapshots of streamed content as a way to enhance streamer-viewer interaction, we introduce Snapstream, a system that allows users to take snapshots of the live stream, annotate them, and share the annotated snapshots in the chat. Streamers can also verbally reference a specific snapshot during streaming to respond to viewers’ questions or comments. Results from live deployments show that participants communicate more expressively and clearly with increased engagement using Snapstream. Participants used snapshots to reference part of the artwork, give suggestions on it, make fun images or memes, and log intermediate milestones. Our findings suggest that visual interaction enables richer experiences in live streaming.
System
(a) The video player. (b) Users can share a snapshot in the chat. (c) Streamers can refer to and respond to a specific snapshot by voice; when the streamer mentions a snapshot verbally, it is highlighted for all users. (d) Users can mention a specific snapshot. (e) Users can take a snapshot and (f) filter the chat.
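Sharing snapshots in chat and highlighting a snapshot when the streamer references it imply a simple client-server message flow. The TypeScript sketch below illustrates what such chat messages might look like; the message fields, the WebSocket endpoint, and the highlighting logic are illustrative assumptions, not the system's actual implementation.

```typescript
// Hypothetical message types for a Snapstream-style chat (illustrative only).
interface ChatMessage {
  type: "text" | "snapshot" | "highlight";
  userId: string;
  text?: string;          // chat text or a comment attached to a snapshot
  snapshotId?: string;    // id of the snapshot being shared or referenced
  imageDataUrl?: string;  // annotated snapshot encoded as a data URL
  timestamp: number;      // stream time at which the snapshot was taken
}

// Assumed chat endpoint; replace with the real chat server address.
const socket = new WebSocket("wss://example.com/snapstream-chat");

socket.addEventListener("message", (event) => {
  const msg: ChatMessage = JSON.parse(event.data);
  if (msg.type === "highlight" && msg.snapshotId) {
    // When the streamer verbally references a snapshot, the server could
    // broadcast a "highlight" message so every client emphasizes it in chat.
    document
      .getElementById(`snapshot-${msg.snapshotId}`)
      ?.classList.add("highlighted");
  }
});
```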
(a) Users can crop and annotate the snapshot with text and shapes. (b) Users can choose the shape of the mark and its color.
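Capturing a frame of the live stream and annotating it can be approximated in a browser with the Canvas API. Below is a minimal sketch, assuming the player is an HTMLVideoElement; the fixed rectangle and label stand in for the interactive cropping and annotation tools and are not the system's actual code.

```typescript
// Minimal sketch: capture the current video frame and draw a simple annotation.
function takeAnnotatedSnapshot(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");

  // Copy the current frame of the live stream onto the canvas.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  // Example annotation: a red rectangle and a text label (users would draw
  // these interactively in the real interface).
  ctx.strokeStyle = "red";
  ctx.lineWidth = 4;
  ctx.strokeRect(50, 50, 200, 120);
  ctx.fillStyle = "red";
  ctx.font = "24px sans-serif";
  ctx.fillText("Maybe brighten this area?", 50, 40);

  // Return the annotated snapshot as a data URL, ready to attach to a chat message.
  return canvas.toDataURL("image/png");
}
```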
Results
Different use cases of snapshots in the studies. (a)-top: a user annotated the image with a red circle (snapshots cropped to fit the table). (a)-bottom: a user cropped the image. (b)-top: a user added glasses and a mouth. (b)-bottom: a user added a background object with red rectangles (snapshots cropped to fit the table). (c)-top: a user added an antenna. (c)-bottom: a user added text. (d)-top: a user cropped the image. (d)-bottom: a user cropped the image.
Participants communicate more expressively and clearly with increased engagement using Snapstream.
Paper
Bibtex
@inproceedings{10.1145/3313831.3376390,
  author = {Yang, Saelyne and Lee, Changyoon and Shin, Hijung Valentina and Kim, Juho},
  title = {Snapstream: Snapshot-Based Interaction in Live Streaming for Visual Art},
  year = {2020},
  isbn = {9781450367080},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3313831.3376390},
  doi = {10.1145/3313831.3376390},
  booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  pages = {1--12},
  numpages = {12},
  keywords = {live streaming, online interaction, chat interaction, context sharing},
  location = {Honolulu, HI, USA},
  series = {CHI '20}
}
Acknowledgments
The authors would like to thank Howard Pinsky, Terry White, and Paul Trani at Adobe for their kind advice and discussions. This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017M3C4A7065960). Saelyne Yang is supported by the Kwanjeong Educational Foundation Scholarship.