Teachers' in-the-moment support is a limited resource in technology-supported classrooms, and teachers must decide whom to help and when during ongoing student work. However, less is known about how students' prior help history (whether they were helped earlier) and their engagement states (e.g., idle, struggle) shape teachers' decisions, and whether observed learning benefits associated with teacher help extend beyond the current class session. To address these questions, we first conducted interviews with nine K-12 mathematics teachers to identify candidate decision factors for teacher help. We then analyzed 1.4 million student-system interactions from 339 students across 14 classes in the MATHia intelligent tutoring system by linking teacher-logged help events with fine-grained engagement states. Mixed-effects models show that students who received help earlier were more likely to receive additional help later, even after accounting for current engagement state. Cross-lagged panel analyses further show that teacher help recurred across sessions, whereas idle behavior did not receive sustained attention over time. Finally, help coincided with immediate learning within sessions.
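The linking step, matching each teacher-logged help event to the student's engagement state at that moment, can be sketched as an interval lookup. This is a minimal sketch with hypothetical field names, not the paper's actual pipeline:

```python
from bisect import bisect_right

def link_help_to_state(help_events, state_changes):
    """For each timestamped help event, look up the engagement state
    the student was in when help arrived.

    help_events: list of (timestamp, student_id)
    state_changes: dict student_id -> sorted list of (timestamp, state),
                   each entry marking when the student entered that state.
    (Both schemas are illustrative assumptions.)
    """
    linked = []
    for t, student in help_events:
        changes = state_changes.get(student, [])
        times = [ts for ts, _ in changes]
        i = bisect_right(times, t) - 1  # last state change at or before t
        state = changes[i][1] if i >= 0 else "unknown"
        linked.append((t, student, state))
    return linked
```

Each linked record can then feed a mixed-effects model with student and class as grouping factors.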
Help facilities have been crucial in helping users learn about software for decades. Yet despite the widespread prevalence of game engines and game editors that ship with many of today's most popular games, there is little empirical evidence on how help facilities impact game-making. For instance, certain types of help facilities may help users more than others. To better understand help facilities, we created game-making software that allowed us to systematically vary the type of help available. We then ran a study with 1,646 participants that compared six help facility conditions: 1) Text Help, 2) Interactive Help, 3) Intelligent Agent Help, 4) Video Help, 5) All Help, and 6) No Help. Each participant created their own first-person shooter game level using our game-making software with a randomly assigned help facility condition. Results indicate that Interactive Help has the greatest positive impact on time spent, controls learnability, learning motivation, total editor activity, and game level quality. Video Help is a close second across these same measures.
The Gaussian wiretap channel with rate-limited help, available at the legitimate receiver (Rx) and/or transmitter (Tx), is studied under various channel configurations (degraded, reversely degraded, and non-degraded). In the case of Rx help and all channel configurations, the rate-limited help results in a secrecy capacity boost equal to the help rate irrespective of whether the help is secure or not, so that the secrecy of the help does not provide any capacity increase. The secrecy capacity is positive for the reversely-degraded channel (where the no-help secrecy capacity is zero) and no wiretap coding is needed to achieve it. More noise at the legitimate receiver can sometimes result in higher secrecy capacity. The secrecy capacity with Rx help is not increased even if the helper is aware of the message being transmitted. The same secrecy capacity boost also holds if non-secure help is available to the transmitter (encoder), in addition to or instead of the same Rx help, so that, in the case of joint Tx/Rx help, one help link can be omitted without affecting the capacity. If the Rx/Tx help links are independent of each other, then the boost in the secrecy capacity is the sum of the help rates.
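In notation of our own choosing (not taken from the paper), with $C_1$ and $C_2$ the capacities of the legitimate and eavesdropper channels and $[x]^+ = \max(x, 0)$, the abstract's claims suggest the following additive structure:

```latex
% No-help secrecy capacity of the degraded Gaussian wiretap channel:
C_{s,0} = \left[ C_1 - C_2 \right]^+
% With Rx help of rate R_h (secure or not), the boost equals the help rate:
C_s(R_h) = \left[ C_1 - C_2 \right]^+ + R_h
% Reversely-degraded case: C_1 - C_2 <= 0, so C_s = R_h > 0 with help.
% Independent Rx/Tx help links of rates R_{rx} and R_{tx}:
C_s = \left[ C_1 - C_2 \right]^+ + R_{rx} + R_{tx}
```

These expressions are a plausible reading of the abstract's statements, not formulas quoted from the paper.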
The complexity of navigating digital privacy, safety, and security threats often falls directly on users. This leads to users seeking help from family and peers, platforms and advice guides, dedicated communities, and even large language models (LLMs). As a precursor to improving resources across this ecosystem, our community needs to understand what help seeking looks like in the wild. To that end, we blend qualitative coding with LLM fine-tuning to sift through over one billion Reddit posts from the last four years to identify where and for what users seek digital privacy, safety, or security help. We isolate three million relevant posts with 93% precision and recall and automatically annotate each with the topics discussed (e.g., security tools, privacy configurations, scams, account compromise, content moderation, and more). We use this dataset to understand the scope and scale of help seeking, the communities that provide help, and the types of help sought. Our work informs the development of better resources for users (e.g., user guides or LLM help-giving agents) while underscoring the inherent challenges of supporting users through complex combinations of threats and platforms.
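The reported 93% precision and recall are standard classifier metrics computed from raw counts; a minimal helper (the counts in the usage note are illustrative, not the paper's confusion matrix):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall of a binary classifier from raw counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For example, 93 true positives with 7 false positives and 7 false negatives yields precision and recall of 0.93 each.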
Blind and low-vision (BLV) people face many challenges when venturing into public environments, often wishing it were easier to get help from people nearby. Ironically, while many sighted individuals are willing to help, such interactions are infrequent. Asking for help is socially awkward for BLV people, and sighted people lack experience in helping BLV people. Through a mixed-ability research-through-design process, we explore four diverse approaches to how assistive technology can serve as a help supporter that collaborates with both BLV and sighted parties throughout the help process. These approaches span two phases: the connection phase (finding someone to help) and the collaboration phase (facilitating help after finding someone). Our findings from a 20-participant mixed-ability study reveal how help supporters can best facilitate connection, which types of information they should present during both phases, and more. We discuss design implications for future approaches to support face-to-face help.
Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, thereby reducing willingness to accept help, likelihood of future use, and performance expectancy of AI. We report two vignette-based experiments (Study~1: $N=761$; Study~2: $N=571$, preregistered). Study~1 compared anticipatory and reactive help provided by an AI vs. a human, while Study~2 distinguished between \emph{offering} (suggesting help) and \emph{providing} (acting automatically). In Study 1, AI reactive help was more threatening than reactive human help. Across both studies, anticipatory help increased users' self-threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism through which anticipatory help, a proactive AI feature, may backfire, and suggest design implications to be tested in interactive systems.
Generalized reciprocity -- the tendency to help others after receiving help oneself -- is widely theorized as a mechanism sustaining cooperation on online knowledge-sharing platforms. Yet robust empirical evidence from field settings remains surprisingly scarce. Prior studies relying on survey self-reports struggle to distinguish reciprocity from other prosocial motives, while observational designs confound reciprocity with baseline user activity, producing upward-biased estimates. We address these empirical challenges by developing a matched difference-in-differences survival analysis that leverages the temporal structure of help-seeking and help-giving on Stack Overflow. Using Cox proportional hazards models on over 21 million questions, we find that receiving an answer significantly increases a user's propensity to help others, but this effect is concentrated among newcomers and declines with platform experience. This pattern suggests that reciprocity functions primarily as a contributor-recruitment mechanism, operating before platform-specific incentives such as reputation and status displace the general moral impulse to reciprocate. Response time moderates the effect.
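As a crude stand-in for the paper's matched Cox analysis, one can compare help-giving events per unit of follow-up time between users who just received an answer and matched controls. This incidence-rate-ratio sketch is only a rough analogue of a hazard ratio, and all numbers in the usage note are illustrative:

```python
def incidence_rate(events, person_time):
    """Events per unit of follow-up time (e.g., answers given per user-day)."""
    return events / person_time

def rate_ratio(treated_events, treated_time, control_events, control_time):
    """Crude incidence-rate ratio: help-giving rate among users who just
    received an answer vs. matched controls. A ratio > 1 is consistent
    with generalized reciprocity (a rough stand-in for a hazard ratio)."""
    return (incidence_rate(treated_events, treated_time)
            / incidence_rate(control_events, control_time))
```

For example, 30 help-giving events over 100 user-days among answered users against 10 events over 100 user-days among controls gives a ratio of about 3.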
People frequently use online forums to get help from experts to answer questions about feature-rich software. However, they may have to wait minutes, hours, or even days to receive advice. We investigate the potential to leverage experts to provide quicker help. We collected over 200 questions from online forums for two feature-rich software applications and suspected a quarter were short enough to be answered in less than one minute (defined as nanoquestions). We then conducted a study with 28 experts recruited from help forums to confirm this assumption, and explore whether there was a preference between text and audio answers. For more than half of the nanoquestions participants saw, they could give advice that they believed was helpful in under 60 seconds. Finally, we collected feedback about what makes a question quick to answer to inspire the design of future tools for ultra rapid human-to-human help.
Mental health disorders are a global crisis. While various datasets exist for detecting such disorders, there remains a critical gap in identifying individuals actively seeking help. This paper introduces a novel dataset, M-Help, specifically designed to detect help-seeking behavior on social media. The dataset goes beyond traditional labels by identifying not only help-seeking activity but also specific mental health disorders and their underlying causes, such as relationship challenges or financial stressors. AI models trained on M-Help can address three key tasks: identifying help-seekers, diagnosing mental health conditions, and uncovering the root causes of issues.
Help-seeking is a critical way for students to learn new concepts, acquire new skills, and get unstuck when problem-solving in their computing courses. The recent proliferation of generative AI tools, such as ChatGPT, offers students a new source of help that is always available on-demand. However, it is unclear how this new resource compares to existing help-seeking resources along dimensions of perceived quality, latency, and trustworthiness. In this paper, we investigate the help-seeking preferences and experiences of computing students now that generative AI tools are available to them. We collected survey data (n=47) and conducted interviews (n=8) with computing students. Our results suggest that although these models are being rapidly adopted, they have not yet fully eclipsed traditional help resources. The help-seeking resources that students rely on continue to vary depending on the task and other factors. Finally, we observed preliminary evidence about how help-seeking with generative AI is a skill that needs to be developed, with disproportionate benefits for those who are better able to harness the capabilities of LLMs. We discuss potential implications for integrating generative AI into computing education.
We propose leveraging prosocial observations to cultivate new social norms to encourage prosocial behaviors toward delivery robots. With an online experiment, we quantitatively assess updates in norm beliefs regarding human-robot prosocial behaviors through observational learning. Results demonstrate that the initially perceived normativity of helping robots is influenced by familiarity with delivery robots and perceptions of robots' social intelligence. Observing human-robot prosocial interactions notably shifts people's normative beliefs about prosocial actions, thereby changing their perceived obligations to offer help to delivery robots. Additionally, we found that observing robots offering help to humans, rather than receiving help, more significantly increased participants' feelings of obligation to help robots. Our findings provide insights into prosocial design for future mobility systems. Improved familiarity with robot capabilities and portraying them as desirable social partners can help foster wider acceptance. Furthermore, robots need to be designed to exhibit higher levels of interactivity and reciprocal capabilities for prosocial behavior.
Posting help-seeking requests on social media has been broadly adopted by victims during natural disasters to look for urgent rescue and supplies. Help-seeking requests need to attract sufficient public attention and be promptly routed to the intended target(s) for timely responses. However, the huge volume and diverse types of crisis-related posts on social media may prevent help-seeking requests from receiving adequate engagement and leave them overwhelmed. To understand this problem, this work adopts a mixed-methods approach to characterize how help-seeking requests become overwhelmed and the strategies individuals and online communities use to cope. We focused on the 2021 Henan Floods in China and collected 141,674 help-seeking posts with the keyword "Henan Rainstorm Mutual Aid" on Weibo, a popular Chinese social media platform. The findings indicate that help-seeking posts confront critical challenges of both external overwhelm (i.e., an enormous number of non-help-seeking posts with the help-seeking-related keyword distracting public attention) and internal overwhelm (i.e., attention inequality, with 5% of help-seeking posts receiving more than 95% of likes, comments, and shares).
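The internal-overwhelm statistic (the top 5% of posts capturing over 95% of engagement) corresponds to a simple top-share concentration measure; a sketch, assuming a list of per-post engagement counts (the input format is an assumption):

```python
def top_share(engagements, top_frac=0.05):
    """Fraction of total engagement (e.g., likes + comments + shares)
    captured by the top `top_frac` share of posts; a simple measure of
    attention inequality."""
    vals = sorted(engagements, reverse=True)
    k = max(1, int(len(vals) * top_frac))
    total = sum(vals)
    return sum(vals[:k]) / total if total else 0.0
```

With perfectly equal engagement the top 5% hold roughly 5% of the total; values near 1.0 indicate severe concentration like that reported above.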
While reinforcement learning (RL) agents often perform well during training, they can struggle with distribution shift in real-world deployments. One particularly severe risk of distribution shift is goal misgeneralization, where the agent learns a proxy goal that coincides with the true goal during training but not during deployment. In this paper, we explore whether allowing an agent to ask for help from a supervisor in unfamiliar situations can mitigate this issue. We focus on agents trained with PPO in the CoinRun environment, a setting known to exhibit goal misgeneralization. We evaluate multiple methods for determining when the agent should request help and find that asking for help consistently improves performance. However, we also find that methods based on the agent's internal state fail to proactively request help, instead waiting until mistakes have already occurred. Further investigation suggests that the agent's internal state does not represent the coin at all, highlighting the importance of learning nuanced representations, the risks of ignoring everything not immediately relevant to reward, and the necessity of developing ask-for-help strategies tailored to the agent.
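One common trigger for requesting help, of the kind such comparisons typically evaluate, is thresholding the entropy of the policy's action distribution. The entropy trigger and threshold below are illustrative assumptions, not necessarily among the paper's methods:

```python
import math

def policy_entropy(action_probs):
    """Shannon entropy (in nats) of the agent's action distribution."""
    return -sum(p * math.log(p) for p in action_probs if p > 0)

def should_ask_for_help(action_probs, threshold):
    """Request supervisor help when the policy is uncertain, i.e. when
    the entropy of its action distribution exceeds a tuned threshold.
    Both the entropy criterion and the threshold are illustrative."""
    return policy_entropy(action_probs) > threshold
```

A confident policy (one action near probability 1) has entropy near 0 and proceeds alone; a near-uniform policy over four actions has entropy close to ln(4) ≈ 1.39 and triggers a help request.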
This project investigates factors that influence the perceived helpfulness of Amazon product reviews through machine learning techniques. After extensive feature analysis and correlation testing, we identified key metadata characteristics that serve as strong predictors of review helpfulness. While we initially explored natural language processing approaches using TextBlob for sentiment analysis, our final model focuses on metadata features that demonstrated more significant correlations, including the number of images per review, the reviewer's historical helpful votes, and temporal aspects of the review. The data pipeline encompasses careful preprocessing and feature standardization steps to prepare the input for model training. Through systematic evaluation of different feature combinations, we discovered that metadata features selected via a correlation threshold provide, in combination, reliable signals for predicting how helpful other Amazon users will find a review. This insight suggests that contextual and user-behavioral factors may be more indicative of review helpfulness than the linguistic content itself.
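The threshold-based selection can be sketched as filtering features by absolute Pearson correlation with the helpfulness target. The 0.3 cutoff and feature names below are illustrative, not the project's actual values:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(feature_cols, target, threshold=0.3):
    """Keep metadata features whose absolute correlation with the
    helpfulness target meets `threshold` (cutoff is illustrative)."""
    return [name for name, vals in feature_cols.items()
            if abs(pearson(vals, target)) >= threshold]
```

The surviving features are then combined in the downstream helpfulness model.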
When people need help with their day-to-day activities, they turn to family, friends, or neighbours. But despite an increasingly networked world, technology falls short in finding suitable volunteers. In this paper, we propose uHelp, a platform for building a community of helpful people and for helping community members find appropriate help within their social network. Lately, applications that focus on finding volunteers have started to appear, such as Helpin or Facebook's Community Help. What distinguishes uHelp from these applications is its trust-based intelligent search for volunteers: although trust is crucial to such social applications, none of them has yet achieved a trust-building solution like uHelp's. uHelp's intelligent search for volunteers is based on a number of AI technologies: (1) a novel trust-based flooding algorithm that navigates one's social network looking for appropriate trustworthy volunteers; (2) a novel trust model that maintains the trustworthiness of peers by learning from their similar past experiences; and (3) a semantic similarity model that assesses the similarity of experiences.
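A trust-based flooding search can be sketched as a breadth-first traversal that propagates a request only along sufficiently trusted edges. The `min_trust` and `max_hops` parameters and the graph representation are illustrative assumptions, not uHelp's actual design:

```python
from collections import deque

def trust_flood(graph, trust, start, is_suitable, min_trust=0.5, max_hops=3):
    """Flood a help request from `start` through the social network,
    following only edges whose trust score meets `min_trust`, up to
    `max_hops`, and collect suitable volunteers along the way.

    graph: dict node -> list of neighbour nodes
    trust: dict (node, neighbour) -> trust score in [0, 1]
    is_suitable: predicate deciding whether a node can handle the task
    """
    seen = {start}
    queue = deque([(start, 0)])
    volunteers = []
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # request does not propagate further
        for nxt in graph.get(node, []):
            if nxt in seen or trust.get((node, nxt), 0.0) < min_trust:
                continue  # skip untrusted or already-reached peers
            seen.add(nxt)
            if is_suitable(nxt):
                volunteers.append(nxt)
            queue.append((nxt, hops + 1))
    return volunteers
```

In a full system the trust scores would come from the learned trust model and `is_suitable` from the semantic similarity of past experiences.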
Locating objects is a significant challenge for the visually impaired, and not one that fades with practice. It hinders their independence and can push them toward risky and dangerous situations. Hence, in the spirit of making the visually challenged more self-sufficient, we present SonoVision, a smartphone application that helps them find everyday objects using sound cues delivered through earphones or headphones. If an object is to the user's right or left, the app plays a sinusoidal tone in the corresponding ear; to indicate objects located directly ahead, the tone plays in both ears simultaneously. These sound cues can help a visually impaired individual locate objects with only their smartphone, reducing reliance on people nearby and consequently making them more independent. The application is built with the Flutter development platform and uses the EfficientDet-D2 model for object detection in the backend. We believe the app will significantly assist the visually impaired in a safe and user-friendly manner, with its capacity to work completely offline.
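The left/right/both-ears cue can be sketched as a constant-power stereo pan of a sinusoidal tone. The abstract does not specify SonoVision's exact mapping, so the azimuth-to-gain function below is an assumption:

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power pan: map object direction (-90 = hard left,
    0 = straight ahead, +90 = hard right) to (left, right) channel gains.
    The specific mapping is illustrative, not SonoVision's."""
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

def tone_sample(azimuth_deg, t, freq=440.0):
    """One stereo sample of the sinusoidal cue at time t (seconds)."""
    left, right = stereo_gains(azimuth_deg)
    s = math.sin(2 * math.pi * freq * t)
    return left * s, right * s
```

An object hard left drives only the left channel, one straight ahead drives both channels equally, matching the cue scheme described above.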
Many quantum algorithms, to compute some property of a unitary $U$, require access not just to $U$ but to $cU$, the unitary with a control qubit. We show that having access to $cU$ does not help for a large class of quantum problems. For a quantum circuit which uses $cU$ and $cU^\dagger$ and outputs $|\psi(U)\rangle$, we show how to "decontrol" the circuit into one which uses only $U$ and $U^\dagger$ and outputs $|\psi(\varphi U)\rangle$ for a uniformly random phase $\varphi$, with a small amount of time and space overhead. When we only care about the output state up to a global phase on $U$, the decontrolled circuit suffices. Stated differently, $cU$ is only helpful because it contains global phase information about $U$. A version of our procedure is described in an appendix of Sheridan, Maslov, and Mosca (arXiv:0810.3843). Our goal with this work is to popularize this result by generalizing it and investigating its implications, in order to counter negative results in the literature which might lead one to believe that decontrolling is not possible. As an application, we give a simple proof for the existence of unitary ensembles which are pseudorandom under access to $U$ and $U^\dagger$.
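The standard definition of the controlled unitary makes the global-phase point concrete: a global phase of $U$ becomes a relative phase under control, which is why $cU$ carries phase information that $U$ alone does not.

```latex
% Controlled-U acts as the identity on the |0> control branch:
cU = |0\rangle\langle 0| \otimes I + |1\rangle\langle 1| \otimes U
% A global phase \varphi of U survives as a relative phase under control:
c(\varphi U) = |0\rangle\langle 0| \otimes I
             + \varphi\, |1\rangle\langle 1| \otimes U
             \neq \varphi\, cU
```

So $\varphi U$ and $U$ are physically indistinguishable as bare unitaries, but $c(\varphi U)$ and $cU$ are not, consistent with the decontrolling result above.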
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leaking their images, or purposefully leaking images to enact revenge or exert control. In this paper, we explore how people seek and receive help for IBSA on social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of 261 posts to qualitatively examine how various types of IBSA unfold, including the mapping of gender, relationship dynamics, and technology involvement to different types of IBSA. We also explore the support needs of victim-survivors experiencing IBSA and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. Finally, we highlight sociotechnical gaps in connecting victim-survivors with important care, regardless of whom they turn to for help.
Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive praise, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately. Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who posts the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews. To address these challenges, we introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines.
Retrieval-Augmented Generation (RAG) has gained significant popularity in modern Large Language Models (LLMs) due to its effectiveness in introducing new knowledge and reducing hallucinations. However, our understanding of RAG remains limited: how RAG helps the reasoning process, and whether RAG can improve reasoning capability, remain open questions. While external documents are typically considered a means of incorporating domain-specific information, they also contain intermediate reasoning results related to the query; this suggests that documents could enhance the reasoning capability of LLMs, which has not been previously explored. In this paper, we investigate this issue in depth and find that while RAG can assist with reasoning, the help is limited. If we conceptualize the reasoning process as a tree with fixed depth, then RAG struggles to assist LLMs in performing deeper reasoning. Additionally, the information in the documents requires preprocessing to filter out noise. We demonstrate that this preprocessing is difficult to achieve by simply fine-tuning the LLM; it often necessitates numerous additional transformer layers to solve the problem.