Title: | HCI Datasets |
---|---|
Description: | A collection of datasets of human-computer interaction (HCI) experiments. Each dataset is from an HCI paper, with all fields described and the original publication linked. All paper authors of included data have consented to the inclusion of their data in this package. The datasets include data from a range of HCI studies, such as pointing tasks, user experience ratings, and steering tasks. Dataset sources: Bergström et al. (2022) <doi:10.1145/3490493>; Dalsgaard et al. (2021) <doi:10.1145/3489849.3489853>; Larsen et al. (2019) <doi:10.1145/3338286.3340115>; Lilija et al. (2019) <doi:10.1145/3290605.3300676>; Pohl and Murray-Smith (2013) <doi:10.1145/2470654.2481307>; Pohl and Mottelson (2022) <doi:10.3389/frvir.2022.719506>. |
Authors: | Henning Pohl [aut, cre] |
Maintainer: | Henning Pohl <[email protected]> |
License: | CC BY 4.0 |
Version: | 0.1.0 |
Built: | 2024-11-10 03:43:47 UTC |
Source: | https://github.com/henningpohl/hcidata |
Aggregated data from an experiment where participants used three different means of input to control a game. As established in previous work, the three means of input vary in objective sense of agency. This study collected subjective measures of agency, as well as subjective measures of user experience, for comparison.
AgencyUX
A data frame with 126 observations of the following 26 variables:
Participant ID.
Participant age.
Participants' self-reported gender.
Which device was used (either touchpad, on-skin tapping, or button).
AttrakDiff (pragmatic) = "I found the device: confusing–structured" (7-point scale).
AttrakDiff (pragmatic) = "I found the device: impractical–practical" (7-point scale).
AttrakDiff (pragmatic) = "I found the device: complicated–simple" (7-point scale).
AttrakDiff (pragmatic) = "I found the device: unpredictable–predictable" (7-point scale).
AttrakDiff (hedonic) = "I found the device: dull–captivating" (7-point scale).
AttrakDiff (hedonic) = "I found the device: tacky–stylish" (7-point scale).
AttrakDiff (hedonic) = "I found the device: cheap–premium" (7-point scale).
AttrakDiff (hedonic) = "I found the device: unimaginative–creative" (7-point scale).
UMUX-LITE 1 = "This system's capabilities meet my requirements: strongly disagree–strongly agree" (7-point scale).
UMUX-LITE 2 = "This system is easy to use: strongly disagree–strongly agree" (7-point scale).
NASA-TLX (mental demand) = "How mentally demanding was the task? low–high" (21-point scale).
NASA-TLX (physical demand) = "How physically demanding was the task? low–high" (21-point scale).
NASA-TLX (temporal demand) = "How hurried or rushed was the pace of the task? low–high" (21-point scale).
NASA-TLX (performance) = "How successful were you in accomplishing what you were asked to do? low–high" (21-point scale).
NASA-TLX (effort) = "How hard did you have to work to accomplish your level of performance? low–high" (21-point scale).
NASA-TLX (frustration) = "How insecure, discouraged, irritated, stressed, and annoyed were you? low–high" (21-point scale).
Body Ownership = "It felt like the device I was using was part of my body: strongly disagree–strongly agree" (7-point scale).
Agency = "It felt like I was in control of the movements during the task: strongly disagree–strongly agree" (7-point scale).
Agency = "What is the degree of control you felt? lowest–highest" (7-point scale).
Agency = "Indicate how much it felt like pressing/tapping the button/touchpad/arm caused the spacecraft to shoot: not at all–very much" (7-point scale).
Perceived task duration in seconds.
Hit percentage participants achieved when playing the game.
Bergström J, Knibbe J, Pohl H, Hornbæk K (2022). “Sense of Agency and User Experience: Is There a Link?” ACM Trans. Comput.-Hum. Interact., 29(4). ISSN 1073-0516, doi:10.1145/3490493.
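To explore the dataset in R, a natural first step is comparing agency ratings across the three input devices. The sketch below is illustrative only: `Device` and `Agency1` are assumed column names, not confirmed by this documentation; `names(AgencyUX)` lists the actual ones.

```r
# install.packages("hcidata")  # if not yet installed
library(hcidata)

str(AgencyUX)  # lists the actual names of all 26 variables

# Mean of one agency item per input device.
# "Device" and "Agency1" are assumed column names for illustration.
aggregate(Agency1 ~ Device, data = AgencyUX, FUN = mean)
```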
Data from a study on casual interactions where participants had to move a ball from one side of a level to the other. They could use three different kinds of interaction to control the ball: (1) dragging via direct touch, (2) rate-controlled movement via hovering, and (3) fling gestures above the device. Depending on a level's index of difficulty, participants picked different interactions to solve it.
CasualSteering
A data frame with 84 observations of the following 6 variables:
Participant ID.
Level ID.
Index of difficulty of the level.
Percentage share of touch interactions.
Percentage share of hover interactions.
Number of mid-air gestures performed by the participant.
Pohl H, Murray-Smith R (2013). “Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, 2223–2232. ISBN 9781450318990, doi:10.1145/2470654.2481307.
Other mobile interaction: HandSize
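To check the described shift in interaction choice, one can test whether the share of direct-touch interactions varies with level difficulty. A minimal sketch, assuming the hypothetical column names `IndexOfDifficulty` and `TouchShare`:

```r
library(hcidata)

# Does reliance on direct touch change with level difficulty?
# "IndexOfDifficulty" and "TouchShare" are assumed column names.
cor.test(CasualSteering$IndexOfDifficulty, CasualSteering$TouchShare)
```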
Data from a remote VR study where participants were tasked with keeping their hand within boxes moving in front of them. They did so with three different textures for their hands: (1) green alien hands, (2) hands in their own skin tone, and (3) hands in a mismatched skin tone. After each trial, participants gave ratings on presence and the look of the hands.
HafniaHands
A list with two entries:
participants with 5 fields for 112 study participants:
Participant ID.
Participant age.
Participants' self-reported sex.
Participants' amount of experience with VR.
Participants' skin tone on the Fitzpatrick scale.
responses with 9072 entries in 5 fields:
Participant ID.
Trial number.
Trial condition: Alien Hand, Matched Hand, or Mismatched Hand.
Questionnaire item, which is one of:
"I felt that the movements of the virtual hands were caused by my own movements" (Banakou and Slater 2014)
"I felt that the virtual hands I saw were my own hands" (Banakou and Slater 2014)
"I felt that my virtual hands resembled my own (real) hands in terms of shape, skin tone, or other visual features" (Banakou and Slater 2014)
"Please rate the hands based on the opposing adjectives: Inanimate to Living" (Ho and MacDorman 2017)
"Please rate the hands based on the opposing adjectives: Synthetic to Real" (Ho and MacDorman 2017)
"Please rate the hands based on the opposing adjectives: Mechanical movement to Biological movement" (Ho and MacDorman 2017)
"Please rate the hands based on the opposing adjectives: Human-made to Human-like" (Ho and MacDorman 2017)
"Please rate the hands based on the opposing adjectives: Without definite lifespan to Mortal" (Ho and MacDorman 2017)
Humanness = aggregate of the five Ho and MacDorman items above (HQ0–HQ4).
Participants' response on a 7-point (-3 to 3) scale. For Humanness, this is the average of HQ0–HQ4.
Pohl H, Mottelson A (2022). “Hafnia Hands: A Multi-Skin Hand Texture Resource for Virtual Reality Research.” Frontiers in Virtual Reality, 3. ISSN 2673-4192, doi:10.3389/frvir.2022.719506.
Other virtual reality: VrPointing
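Because HafniaHands is a list of two data frames, demographics and responses must be joined by participant ID before analysis. A minimal sketch, assuming both frames share an `ID` column and the responses frame uses the hypothetical names `Condition`, `Item`, and `Response`:

```r
library(hcidata)

# Join demographics onto the long-format questionnaire responses.
# "ID", "Condition", "Item", and "Response" are assumed column names.
d <- merge(HafniaHands$responses, HafniaHands$participants, by = "ID")

# Mean response per hand condition and questionnaire item.
aggregate(Response ~ Condition + Item, data = d, FUN = mean)
```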
Data from a study on the influence of hand size on touch accuracy. Contains hand measurements for 27 participants, information on the two phones used in the study, and 27000 recorded touch samples.
HandSize
A list with three entries:
participants with 16 fields for 27 study participants:
Participant ID.
Participant age.
Participants' self-reported gender.
Participants' personal phone.
Participants' dominant hand.
Length of thumb in cm.
Length of index finger in cm.
Length of middle finger in cm.
Length of ring finger in cm.
Length of pinky finger in cm.
Width of thumb pad in cm.
Width of palm in cm.
Length of palm in cm.
Distance from index finger tip to base of thumb in cm.
Distance from thumb tip to index finger tip when hand is spread open in cm.
Distance from thumb tip to pinky tip when hand is spread open in cm.
devices with 5 fields:
Phone used (Android or iPhone).
Width of screen in px.
Height of screen in px.
Horizontal screen dpi.
Vertical screen dpi.
data with 27000 observations in 10 fields:
Participant ID.
Trial number.
Phone used (Android or iPhone).
Horizontal touch position in pixels.
Vertical touch position in pixels.
Horizontal target position in pixels.
Vertical target position in pixels.
Time it took to make the selection in seconds.
Offset from the target position in mm.
Whether this trial is considered an outlier because the selection happened too quickly or too slowly.
Larsen JN, Jacobsen TH, Boring S, Bergström J, Pohl H (2019). “The Influence of Hand Size on Touch Accuracy.” In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '19. ISBN 9781450368254, doi:10.1145/3338286.3340115.
Other mobile interaction: CasualSteering
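With the touch samples in HandSize$data, a first summary might be the mean target offset per phone after excluding flagged outlier trials. A sketch assuming the hypothetical column names `Phone`, `Offset`, and a logical `Outlier` flag:

```r
library(hcidata)

touches <- HandSize$data

# Mean offset from the target (mm) per phone, excluding trials
# flagged as outliers. "Phone", "Offset", and "Outlier" are
# assumed column names.
aggregate(Offset ~ Phone, data = subset(touches, !Outlier), FUN = mean)
```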
In this study, participants wore an AR headset and were asked to interact with objects occluded by a wall in front of them. They had to reach around to then manipulate the objects. They were supported in this task by several different kinds of visualization that showed them the object of interest. After each trial and at the end of the study, the participants provided ratings of each visualization as well as a ranking of the different visualizations.
OccludedInteraction
A list with two entries:
participants with 6 fields for 24 study participants:
Participant ID.
Participant age.
Participants' self-reported gender.
Whether the participant wears glasses.
Participants' dominant hand.
Self-reported level of experience with AR on a 5-point scale ("none" to "a lot").
ratings with 6 fields:
Participant ID.
Block ID; 99 denotes the final questionnaire after the study.
Occluded object that had to be used. Can be: button, dial, hdmi, hook, or slider.
Visualization available during the trial. Can be: none, static, dynamic, cloned, or see-through.
Which question was asked. Can be:
"Overall, I liked using the visualization when interacting with the object." ("Strongly disagree" to "Strongly agree")
"How well did the visualization support you during the task?" ("Strongly impeded me" to "Strongly supported me")
"I could easily manipulate the object." ("Strongly disagree" to "Strongly agree")
"I could easily check the state of the object." ("Strongly disagree" to "Strongly agree")
"How would you rank the five visualization with respect to how easy/hard they made it to interact with the object?"
"Please rate each view for how well it overall supported you during the study." ("Strongly impeded me" to "Strongly supported me")
Value between 0 and 6 for all ratings and 0 to 4 for the rankings.
Lilija K, Pohl H, Boring S, Hornbæk K (2019). “Augmented Reality Views for Occluded Interaction.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, 1–12. ISBN 9781450359702, doi:10.1145/3290605.3300676.
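Since block 99 holds the final questionnaire, per-trial ratings should be separated from the closing rankings before averaging. A sketch assuming the hypothetical column names `Block`, `Visualization`, `Question`, and `Value`:

```r
library(hcidata)

r <- OccludedInteraction$ratings

# Drop the final questionnaire (block 99) to keep per-trial ratings.
# "Block", "Visualization", "Question", and "Value" are assumed names.
trials <- subset(r, Block != 99)

# Mean rating per visualization and question.
aggregate(Value ~ Visualization + Question, data = trials, FUN = mean)
```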
Data from a study where participants pointed at one of 27 targets in the space in front of them. This version contains calibration poses, participant information, and the final pose of each pointing trial. The full dataset, with all movement within each trial, is available at https://github.com/TorSalve/pointing-data-ATHCC.
VrPointing
A list with three entries:
participants with 12 fields for 13 study participants:
Participant ID.
Participants' dominant hand.
Participants' self-reported gender.
Participant age.
Length of forearm in meters.
Distance from the forearm marker to the elbow in meters.
Index finger length in meters.
Length of the upper arm in meters.
Distance from the upper arm marker to the elbow in meters.
Participant height in meters.
Horizontal distance from the right shoulder marker to the participant's shoulder in meters.
Vertical distance from the right shoulder marker to the participant's shoulder in meters.
calibration with 44 fields for 39 observations (each position and orientation below spans three fields: x, y, and z):
Participant ID.
Calibration pose, where 1 = arms pointing down, 2 = arm pointing to the right, and 3 = arm pointing forward.
Index finger position in meters.
Hand position in meters.
Forearm position in meters.
Upper arm position in meters.
Right shoulder position in meters.
Headset position in meters.
Left shoulder position in meters.
Index finger orientation in radians.
Hand orientation in radians.
Forearm orientation in radians.
Upper arm orientation in radians.
Right shoulder orientation in radians.
Headset orientation in radians.
Left shoulder orientation in radians.
pointing with 48 fields for 1755 observations (positions and orientations again span three fields each):
Participant ID.
Trial number.
Time since beginning of trial in seconds.
Index finger position in meters.
Hand position in meters.
Forearm position in meters.
Upper arm position in meters.
Right shoulder position in meters.
Headset position in meters.
Left shoulder position in meters.
Index finger orientation in radians.
Hand orientation in radians.
Forearm orientation in radians.
Upper arm orientation in radians.
Right shoulder orientation in radians.
Headset orientation in radians.
Left shoulder orientation in radians.
Target position in meters.
Dalsgaard T, Knibbe J, Bergström J (2021). “Modeling Pointing for 3D Target Selection in VR.” In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, VRST '21. ISBN 9781450390927, doi:10.1145/3489849.3489853.
Other virtual reality: HafniaHands
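Because the final fingertip and target positions are stored as x/y/z triples, the pointing error of each trial is the Euclidean distance between them. A sketch assuming the hypothetical column names `indexPos.x/y/z` and `target.x/y/z`:

```r
library(hcidata)

p <- VrPointing$pointing

# Euclidean distance between final index finger position and target.
# "indexPos.*" and "target.*" are assumed column names.
err <- with(p, sqrt((indexPos.x - target.x)^2 +
                    (indexPos.y - target.y)^2 +
                    (indexPos.z - target.z)^2))
summary(err)  # pointing error in meters
```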