Package 'hcidata'

Title: HCI Datasets
Description: A collection of datasets from human-computer interaction (HCI) experiments. Each dataset is from an HCI paper, with all fields described and the original publication linked. The authors of all included papers have consented to the inclusion of their data in this package. The datasets cover a range of HCI studies, such as pointing tasks, user experience ratings, and steering tasks. Dataset sources: Bergström et al. (2022) <doi:10.1145/3490493>; Dalsgaard et al. (2021) <doi:10.1145/3489849.3489853>; Larsen et al. (2019) <doi:10.1145/3338286.3340115>; Lilija et al. (2019) <doi:10.1145/3290605.3300676>; Pohl and Murray-Smith (2013) <doi:10.1145/2470654.2481307>; Pohl and Mottelson (2022) <doi:10.3389/frvir.2022.719506>.
Authors: Henning Pohl [aut, cre]
Maintainer: Henning Pohl <[email protected]>
License: CC BY 4.0
Version: 0.1.0
Built: 2024-11-10 03:43:47 UTC
Source: https://github.com/henningpohl/hcidata
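
To use the datasets documented below, install and attach the package first. A minimal sketch, assuming the package is available on CRAN or installed from the GitHub source above:

# install.packages("hcidata")                     # if available on CRAN
# remotes::install_github("henningpohl/hcidata")  # or install from the GitHub source
library(hcidata)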

Help Index


Sense of Agency and User Experience

Description

Aggregated data from an experiment where participants used three different means of input to control a game. As established in previous work, the three means of input vary in objective sense of agency. This study collected subjective measures of agency, as well as subjective measures of user experience, for comparison.

Usage

AgencyUX

Format

A data frame with 126 observations of the following 26 variables:

ID

Participant ID.

Age

Participant age.

Gender

Participants' self-reported gender.

Condition

Which device was used (either touchpad, on-skin tapping, or button).

AttDiff1

AttrakDiff (pragmatic) = "I found the device: confusing–structured" (7 point scale).

AttDiff2

AttrakDiff (pragmatic) = "I found the device: impractical–practical" (7 point scale).

AttDiff3

AttrakDiff (pragmatic) = "I found the device: complicated–simple" (7 point scale).

AttDiff4

AttrakDiff (pragmatic) = "I found the device: unpredictable–predictable" (7 point scale).

AttDiff5

AttrakDiff (hedonic) = "I found the device: dull–captivating" (7 point scale).

AttDiff6

AttrakDiff (hedonic) = "I found the device: tacky–stylish" (7 point scale).

AttDiff7

AttrakDiff (hedonic) = "I found the device: cheap–premium" (7 point scale).

AttDiff8

AttrakDiff (hedonic) = "I found the device: unimaginative–creative" (7 point scale).

Umux1

UMUX-LITE 1 = "This system's capabilities meet my requirements: strongly disagree–strongly agree" (7 point scale).

Umux2

UMUX-LITE 2 = "This system is easy to use: strongly disagree–strongly agree" (7 point scale).

NASA1

NASA-TLX (mental demand) = "How mentally demanding was the task? low–high" (21 point scale).

NASA2

NASA-TLX (physical demand) = "How physically demanding was the task? low–high" (21 point scale).

NASA3

NASA-TLX (temporal demand) = "How hurried or rushed was the pace of the task? low–high" (21 point scale).

NASA4

NASA-TLX (performance) = "How successful were you in accomplishing what you were asked to do? low–high" (21 point scale).

NASA5

NASA-TLX (effort) = "How hard did you have to work to accomplish your level of performance? low–high" (21 point scale).

NASA6

NASA-TLX (frustration) = "How insecure, discouraged, irritated, stressed, and annoyed were you? low–high" (21 point scale).

Ownership

Body Ownership = "It felt like the device I was using was part of my body: strongly disagree–strongly agree" (7 point scale).

Agency1

Agency = "It felt like I was in control of the movements during the task: strongly disagree–strongly agree" (7 point scale).

Agency2

Agency = "What is the degree of control you felt? lowest–highest" (7 point scale).

Agency3

Agency = "Indicate how much it felt like pressing/tapping the button/touchpad/arm caused the space craft to shot: not at all–very much" (7 point scale).

TimePerception

Perception of task duration in seconds.

HitRate

Hit percentage participants achieved when playing the game.

Source

Bergström J, Knibbe J, Pohl H, Hornbæk K (2022). “Sense of Agency and User Experience: Is There a Link?” ACM Trans. Comput.-Hum. Interact., 29(4). ISSN 1073-0516, doi:10.1145/3490493.
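
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached; it summarizes the three subjective agency items per input condition, using the column names described above:

library(hcidata)
# Mean subjective agency ratings per input device (touchpad, on-skin tapping, button)
aggregate(cbind(Agency1, Agency2, Agency3) ~ Condition, data = AgencyUX, FUN = mean)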


Casual Interaction Steering Study

Description

Data from a study on casual interactions where participants had to move a ball from one side of a level to the other. They could use three different kinds of interaction to control the ball: (1) dragging via direct touch, (2) rate-controlled movement via hovering, and (3) fling gestures above the device. Depending on a level's index of difficulty, participants picked different interactions to solve it.

Usage

CasualSteering

Format

A data frame with 84 observations of the following 6 variables:

PID

Participant ID.

level

Level ID.

difficulty

Index of difficulty of the level.

touch

Percentage share of touch interactions.

hover

Percentage share of hover interactions.

gestures

Number of mid-air gestures performed by the participant.

Source

Pohl H, Murray-Smith R (2013). “Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, 2223–2232. ISBN 9781450318990, doi:10.1145/2470654.2481307.

See Also

Other mobile interaction: HandSize
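
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached; it relates the share of touch and hover interactions, and the number of gestures, to the index of difficulty:

library(hcidata)
# Average interaction shares and gesture counts per index of difficulty
aggregate(cbind(touch, hover, gestures) ~ difficulty, data = CasualSteering, FUN = mean)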


Hafnia Hands Study on Presence with Different Hand Textures in Virtual Reality

Description

Data from a remote VR study where participants were tasked with keeping their hand within boxes moving in front of them. They did so with three different textures for their hands: (1) green alien hands, (2) hands in their own skin tone, and (3) hands in a mismatched skin tone. After each trial, participants gave ratings on presence and the look of the hands.

Usage

HafniaHands

Format

A list with two entries:

participants with 5 fields for 112 study participants:

pid

Participant ID.

Age

Participant age.

Sex

Participants' self-reported sex.

VR Experience

Participants' amount of experience with VR.

Skin Tone

Participants' skin tone on the Fitzpatrick scale.

responses with 9072 entries in 5 fields:

pid

Participant ID.

Trial

Trial number.

Condition

Trial condition: Alien Hand, Matched Hand, or Mismatched Hand.

Measure

Questionnaire item, which is one of:

Agency

"I felt that the movements of the virtual hands were caused by my own movements" (Banakou and Slater 2014)

Body Ownership

"I felt that the virtual hands I saw were my own hands" (Banakou and Slater 2014)

Resemblance

"I felt that my virtual hands resembled my own (real) hands in terms of shape, skin tone, or other visual features" (Banakou and Slater 2014)

HQ0

"Please rate the hands based on the opposing adjectives: Inanimate to Living" (Ho and MacDorman 2017)

HQ1

"Please rate the hands based on the opposing adjectives: Synthetic to Real" (Ho and MacDorman 2017)

HQ2

"Please rate the hands based on the opposing adjectives: Mechanical movement to Biological movement" (Ho and MacDorman 2017)

HQ3

"Please rate the hands based on the opposing adjectives: Human-made to Human-like" (Ho and MacDorman 2017)

HQ4

"Please rate the hands based on the opposing adjectives: Without definite lifespan to Mortal" (Ho and MacDorman 2017)

Humanness

Aggregate of HQ0-HQ4.

Response

Participants' response on a 7-point scale (-3 to 3). For Humanness, this is the average of HQ0-HQ4.

Source

Pohl H, Mottelson A (2022). “Hafnia Hands: A Multi-Skin Hand Texture Resource for Virtual Reality Research.” Frontiers in Virtual Reality, 3. ISSN 2673-4192, doi:10.3389/frvir.2022.719506.

See Also

Other virtual reality: VrPointing
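
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached and that the list entries and factor labels match the format description above:

library(hcidata)
responses <- HafniaHands$responses
# Mean body-ownership rating per hand-texture condition
aggregate(Response ~ Condition,
          data = subset(responses, Measure == "Body Ownership"),
          FUN = mean)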


Touch Performance by Hand Size

Description

Data from a study on the influence of hand size on touch accuracy. Contains hand measurements for 27 participants, information on the two phones used in the study, and 27000 recorded touch samples.

Usage

HandSize

Format

A list with three entries:

participants with 16 fields for 27 study participants:

PID

Participant ID.

Age

Participant age.

Gender

Participants' self-reported gender.

Phone

Participants' personal phone.

Handedness

Participants' dominant hand.

ThumbLength

Length of thumb in cm.

IndexLength

Length of index finger in cm.

MiddleLength

Length of middle finger in cm.

RingLength

Length of ring finger in cm.

PinkyLength

Length of pinky finger in cm.

ThumbPadWidth

Width of thumb pad in cm.

PalmWidth

Width of palm in cm.

PalmLength

Length of palm in cm.

IndexThumbLength

Distance from index finger tip to base of thumb in cm.

ThumbIndexSpan

Distance from thumb tip to index finger tip, in cm, with the hand spread open.

ThumbPinkySpan

Distance from thumb tip to pinky tip, in cm, with the hand spread open.

devices with 5 fields:

Phone

Phone used (Android or iPhone).

ScreenWidth

Width of screen in px.

ScreenHeight

Height of screen in px.

ScreenDpiX

Horizontal screen dpi.

ScreenDpiY

Vertical screen dpi.

data with 27000 observations in 10 fields:

PID

Participant ID.

Trial

Trial number.

Phone

Phone used (Android or iPhone).

TouchX

Horizontal touch position in pixels.

TouchY

Vertical touch position in pixels.

TargetX

Horizontal target position in pixels.

TargetY

Vertical target position in pixels.

SelectionTimeSeconds

Time it took to make the selection in seconds.

ErrorMM

Offset from the target position in mm.

Outlier

Whether this trial is considered an outlier because selection happened too fast or too slow.

Source

Larsen JN, Jacobsen TH, Boring S, Bergström J, Pohl H (2019). “The Influence of Hand Size on Touch Accuracy.” In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '19. ISBN 9781450368254, doi:10.1145/3338286.3340115.

See Also

Other mobile interaction: CasualSteering
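
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached; it relates mean touch error to thumb length, assuming the Outlier field is stored as a logical (or 0/1) flag:

library(hcidata)
touches <- subset(HandSize$data, !Outlier)  # drop trials flagged as too fast or too slow
error <- aggregate(ErrorMM ~ PID, data = touches, FUN = mean)
# Join per-participant mean error with the thumb length measurement
merge(error, HandSize$participants[, c("PID", "ThumbLength")], by = "PID")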


Acquisition Study for Occluded Interaction

Description

In this study, participants wore an AR headset and were asked to interact with objects occluded by a wall in front of them. They had to reach around the wall to manipulate the objects. They were supported in this task by several different kinds of visualization that showed them the object of interest. After each trial and at the end of the study, the participants provided ratings of each visualization as well as a ranking of the different visualizations.

Usage

OccludedInteraction

Format

A list with two entries:

participants with 6 fields for 24 study participants:

user

Participant ID.

age

Participant age.

gender

Participants' self-reported gender.

glasses

Whether the participant wears glasses.

handedness

Participants' dominant hand.

ar_experience

Self-reported level of experience with AR on a 5-point scale ("none" to "a lot").

ratings with 6 fields:

user

Participant ID.

block

Block ID; 99 denotes the final questionnaire after the study.

object

Occluded object that had to be used. Can be: button, dial, hdmi, hook, or slider.

view

Visualization available during the trial. Can be: none, static, dynamic, cloned, or see-through.

question

Which question was asked. Can be:

liked overall

"Overall, I liked using the visualization when interacting with the object." ("Strongly disagree" to "Strongly agree")

supported

"How well did the visualization support you during the task?" ("Strongly impeded me" to "Strongly supported me")

manipulate

"I could easily manipulate the object." ("Strongly disagree" to "Strongly agree")

check state

"I could easily check the state of the object." ("Strongly disagree" to "Strongly agree")

ranking

"How would you rank the five visualization with respect to how easy/hard they made it to interact with the object?"

rating

"Please rate each view for how well it overall supported you during the study." ("Strongly impeded me" to "Strongly supported me")

rating

Response value: 0 to 6 for the ratings and 0 to 4 for the rankings.

Source

Lilija K, Pohl H, Boring S, Hornbæk K (2019). “Augmented Reality Views for Occluded Interaction.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, 1–12. ISBN 9781450359702, doi:10.1145/3290605.3300676.
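
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached and that the question labels match the format description above:

library(hcidata)
ratings <- OccludedInteraction$ratings
# Mean "liked overall" rating per visualization
aggregate(rating ~ view,
          data = subset(ratings, question == "liked overall"),
          FUN = mean)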


Pointing in Virtual Reality

Description

Data from a study where participants pointed at one of 27 targets in the space in front of them. This version contains calibration poses, participant information, and the final pose for each pointing trial. The full dataset with all movement within each trial is available at https://github.com/TorSalve/pointing-data-ATHCC.

Usage

VrPointing

Format

A list with three entries:

participants with 12 fields for 13 study participants:

pid

Participant ID.

handedness

Participants' dominant hand.

gender

Participants' self-reported gender.

age

Participant age.

forearmLength

Length of forearm in meters.

forearmMarkerDist

Distance from the forearm marker to the elbow in meters.

indexFingerLength

Index finger length in meters.

upperArmLength

Length of the upper arm in meters.

upperArmMarkerDist

Distance from the upper arm marker to the elbow in meters.

height

Participant height in meters.

rightShoulderMarkerDist.X

Horizontal distance from the right shoulder marker to the participants' shoulder in meters.

rightShoulderMarkerDist.Y

Vertical distance from the right shoulder marker to the participants' shoulder in meters.

calibration with 44 fields for 39 observations:

pid

Participant ID.

pose

Calibration pose, where 1 = arms pointing down, 2 = arm pointing to the right, and 3 = arm pointing forward.

indexFinger.X, indexFinger.Y, indexFinger.Z

Index finger position in meters.

hand.X, hand.Y, hand.Z

Hand position in meters.

forearm.X, forearm.Y, forearm.Z

Forearm position in meters.

upperArm.X, upperArm.Y, upperArm.Z

Upper arm position in meters.

rightShoulder.X, rightShoulder.Y, rightShoulder.Z

Right shoulder position in meters.

hmd.X, hmd.Y, hmd.Z

Headset position in meters.

leftShoulder.X, leftShoulder.Y, leftShoulder.Z

Left shoulder position in meters.

indexFingerO.X, indexFingerO.Y, indexFingerO.Z

Index finger orientation in radians.

handO.X, handO.Y, handO.Z

Hand orientation in radians.

forearmO.X, forearmO.Y, forearmO.Z

Forearm orientation in radians.

upperArmO.X, upperArmO.Y, upperArmO.Z

Upper arm orientation in radians.

rightShoulderO.X, rightShoulderO.Y, rightShoulderO.Z

Right shoulder orientation in radians.

hmdO.X, hmdO.Y, hmdO.Z

Headset orientation in radians.

leftShoulderO.X, leftShoulderO.Y, leftShoulderO.Z

Left shoulder orientation in radians.

pointing with 48 fields for 1755 observations:

pid

Participant ID.

trial

Trial number.

time

Time since beginning of trial in seconds.

indexFinger.X, indexFinger.Y, indexFinger.Z

Index finger position in meters.

hand.X, hand.Y, hand.Z

Hand position in meters.

forearm.X, forearm.Y, forearm.Z

Forearm position in meters.

upperArm.X, upperArm.Y, upperArm.Z

Upper arm position in meters.

rightShoulder.X, rightShoulder.Y, rightShoulder.Z

Right shoulder position in meters.

hmd.X, hmd.Y, hmd.Z

Headset position in meters.

leftShoulder.X, leftShoulder.Y, leftShoulder.Z

Left shoulder position in meters.

indexFingerO.X, indexFingerO.Y, indexFingerO.Z

Index finger orientation in radians.

handO.X, handO.Y, handO.Z

Hand orientation in radians.

forearmO.X, forearmO.Y, forearmO.Z

Forearm orientation in radians.

upperArmO.X, upperArmO.Y, upperArmO.Z

Upper arm orientation in radians.

rightShoulderO.X, rightShoulderO.Y, rightShoulderO.Z

Right shoulder orientation in radians.

hmdO.X, hmdO.Y, hmdO.Z

Headset orientation in radians.

leftShoulderO.X, leftShoulderO.Y, leftShoulderO.Z

Left shoulder orientation in radians.

target.X, target.Y, target.Z

Target position in meters.

Source

Dalsgaard T, Knibbe J, Bergström J (2021). “Modeling Pointing for 3D Target Selection in VR.” In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, VRST '21. ISBN 9781450390927, doi:10.1145/3489849.3489853.

See Also

Other virtual reality: HafniaHands
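
Examples

A minimal usage sketch (not part of the original documentation), assuming the package is attached; it computes, per participant, the mean Euclidean distance between the final index finger position and the target:

library(hcidata)
pointing <- VrPointing$pointing
# Distance (in meters) between index finger tip and target at the end of each trial
pointing$dist <- sqrt((pointing$indexFinger.X - pointing$target.X)^2 +
                      (pointing$indexFinger.Y - pointing$target.Y)^2 +
                      (pointing$indexFinger.Z - pointing$target.Z)^2)
aggregate(dist ~ pid, data = pointing, FUN = mean)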