To assess the change in accommodative response between electronic devices and hardcopy text after prolonged reading. There were 30 participants (N = 30), with a mean age of 24.29 ± 5.21 years. The accommodative response (lag or lead of accommodation, amplitude of accommodation, and accommodative facility) was measured before and after reading for 60 minutes from a printed book, a smartphone, and a laptop, with a 24-hour break between tasks. Secondary outcomes included comparisons of accommodative responses between hardcopy text and smartphone, smartphone and laptop, and laptop and hardcopy text. Accommodative lag was initially 0.50 ± 0.00 D, with significant increases observed after 60 minutes of reading from a smartphone (1.25 ± 0.50 D), a laptop (1.75 ± 0.50 D), and a hardcopy text (1.50 ± 0.25 D). A greater lag was noted with laptop use compared to smartphone reading (P < 0.01). Accommodative facility was significantly reduced when reading from a laptop compared to both a smartphone (P < 0.01) and a hardcopy (P < 0.01). The binocular mean amplitude of accommodation was -7.50 ± 1.25 D with the laptop and -8.00 ± 1.00 D with the hardcopy, with a P-value of 0.085, indicating no statistically significant difference. Prolonged near work significantly affects accommodative function, with laptops inducing the greatest accommodative lag and reduction in facility. Hardcopy reading preserved accommodative facility better than digital devices, while amplitude of accommodation showed minimal change. These results suggest that sustained laptop use may lead to greater visual strain compared to smartphones or printed text.
Veterans experiencing homelessness (hereinafter, homeless veterans), an important group in society, often have limited access to digital technologies, which may affect their ability to achieve social integration. Using data from annual national surveys of homeless-experienced veterans (HEV) from 2022 through 2024 (1992 in 2022, 2596 in 2023, and 2860 in 2024), this study compared their ownership of cell phone devices and computers or laptops and their use of the internet during a 3-year period. While we found no significant change in ownership of cell phones, we found significant increases from 2022 through 2024 in ownership of smartphones (from 69.1% to 72.3%) and computers or laptops (from 36.7% to 38.5%), as well as use of the internet at least occasionally (from 75.8% to 79.0%) and often (from 71.9% to 77.3%). We observed increased internet use among currently and formerly homeless veterans when we analyzed the samples separately. Together, these findings provide updated prevalence rates of digital technology use among HEV and highlight opportunities for technology-based interventions. More HEV are using digital technologies, but we estimate that more than one-fifth of HEV still do not have a cell phone or smartphone or use the internet at all. Although access to digital technologies has increased in this population, some gaps remain, and further research is needed on how to increase the uptake of new technologies.
E-learning patterns and resource utilization for learning different tasks and domains vary. As data on e-learning from the northeast part of India are scarce, understanding learner characteristics, device access, platform preferences, and e-learning behavior of undergraduate students from the region is essential to develop context-sensitive digital educational strategies. A cross-sectional questionnaire-based study was conducted among 176 Bachelor of Medicine and Bachelor of Surgery (MBBS) students at a tertiary care institute to investigate the pattern of device access, e-learning resources, benefits, and barriers. Associations between gender, MBBS phase, device ownership, and e-learning use were examined using chi-square tests and multivariate logistic regression to identify independent predictors. Smartphone ownership (161, 91.5%) and laptop ownership (82, 46.6%) were high. E-learning was broadly used for theory (168, 95.5%), practical skills (145, 82.4%), clinical cases (141, 80.1%), and assessments (94, 53.4%). YouTube videos were predominantly used for practical and surgical technique learning, by 124 (70.5%) and 98 (55.7%) students, respectively. Awareness of the institutional learning management system (LMS) and National Programme on Technology Enhanced Learning (NPTEL) course attendance was low (22, 12.2%), while paid course attendance was substantial (107, 60.8%). Female students used e-learning significantly more for assessment, and senior MBBS students showed higher usage for clinical case learning and online assessments. Logistic regression revealed smartphone and laptop ownership and gender as significant predictors of engagement in assessment-related learning. E-learning adoption among medical undergraduates is device-dependent, with gender and academic phase influencing domain-specific engagement.
The preference for paid courses and multimedia and social media-based resources highlights the self-directed learning behavior of the students and the need for the promotion of institutional and free national platforms.
Antimicrobial resistance surveillance in ESKAPEE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterobacter spp., and Escherichia coli) requires reproducible, portable whole-genome analysis that public health laboratories, including those operating under data-sovereignty constraints, can run on laptops, institutional servers, or cloud backends without local dependency conflicts. rMAP 2.0 addresses these needs using a containerized Workflow Description Language pipeline executed with Cromwell. rMAP 2.0 standardizes end-to-end bacterial whole-genome analysis (read quality control, trimming, assembly and annotation, resistance/virulence/mobile-element profiling, sequence typing, pangenome inference, and phylogenetic reconstruction) using containerized execution, and generates a single interactive HTML report that collates outputs for rapid review. The workflow supports fully offline execution (including BLAST searches) for data-sovereign deployments and can run on local workstations, institutional servers, and cloud backends where Docker is supported, providing a consistent execution environment without local tool installation. In a representative benchmark of 20 Enterobacterales isolates, rMAP 2.0 completed a cohort run in ∼4.5 hours on an 8-core/16-GB laptop and flagged a record misannotated in public repository metadata (SRR9703249, reclassified from K. pneumoniae to Enterobacter cloacae sequence type 182), while confirming lineage assignments such as E. coli sequence type 131. rMAP 2.0 is available at https://github.com/gmboowa/rMAP-2.0, and example workflow reports are available at https://gmboowa.github.io/rMAP-2.0/.
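A WDL workflow executed with Cromwell is typically launched with Cromwell's documented `run` subcommand. The sketch below shows how such a run might be driven from Python; the workflow file name, input keys, and sample sheet are illustrative assumptions, not rMAP 2.0's actual interface.

```python
import json
import shutil
import subprocess
from pathlib import Path

def build_cromwell_command(wdl: str, inputs: str, jar: str = "cromwell.jar") -> list[str]:
    """Assemble the command line for a single Cromwell run of a WDL workflow."""
    return ["java", "-jar", jar, "run", wdl, "--inputs", inputs]

# Hypothetical inputs JSON; real pipelines define their own input namespace.
inputs = {"rmap.sample_sheet": "samples.tsv", "rmap.output_dir": "results"}
Path("inputs.json").write_text(json.dumps(inputs, indent=2))

cmd = build_cromwell_command("rmap.wdl", "inputs.json")
print(" ".join(cmd))

# Launch only if Java and the (hypothetical) workflow file are actually present.
if shutil.which("java") and Path("rmap.wdl").exists():
    subprocess.run(cmd, check=True)
```

Because the container images pin every tool version, the same command line behaves identically on a laptop, an institutional server, or a cloud backend.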
Behavioral and psychological symptoms of dementia (BPSD) and delirium superimposed on dementia (DSD) can lead to severe complications if they are not accurately identified and managed. Effective dementia care therefore requires clear differentiation, systematic assessment, and appropriate nursing interventions. This study aimed to develop VRDementia: BPSD/DSD, a virtual reality simulation program, and to qualitatively examine its validity and usability as a development-based pilot study. Using the ADDIE model (Analysis, Design, Development, Implementation, Evaluation), the program was systematically developed. During the analysis phase, qualitative interviews and literature reviews identified educational needs among nurses in long-term care hospitals. Key challenges included distinguishing agitation/aggression (BPSD) from hyperactive DSD, and depression (BPSD) from hypoactive DSD. Based on these findings, four case-based scenarios were created. Content validity and usability were qualitatively evaluated through semi-structured interviews with five experienced nurses (≥5 years of clinical experience). The program consists of four sessions addressing agitation/aggression and depression (BPSD), and hyperactivity and hypoactivity (DSD). Nurses practice symptom assessment, therapeutic communication, physician reporting, and nursing interventions. The simulation is accessible via head-mounted display (HMD), mobile devices, and PC (including laptops). Qualitative feedback indicated that participants perceived the program as useful and applicable for dementia care education, including its potential use in interdisciplinary training contexts. VRDementia: BPSD/DSD is a valid, practical educational tool that improves nurses' competence in distinguishing and managing BPSD and DSD. This program may contribute to higher quality dementia care in clinical settings. 
The integration of digital technologies into pharmaceutical education is crucial for preparing future practitioners. This study aimed to comprehensively investigate the utilization, perceived importance, associated learning outcomes, and challenges related to technological tool usage among undergraduate pharmacy students. A descriptive cross-sectional study was conducted among undergraduate pharmacy students. Data were collected using a validated, structured, self-administered paper-based questionnaire. Perception scores were trichotomized for analysis. Statistical analysis employed descriptive statistics and multivariable logistic regression to identify predictors of poor perceived learning outcomes. The study included 549 undergraduate pharmacy students (47.7% female, 52.3% male) with a median age of 23 years (IQR: 20-25 years). Hardware utilization patterns showed that the majority of participants (87.8%) utilized smartphones, followed by tablets (72.7%) and laptops (72.1%), while eBooks (65.0%), file-sharing tools (51.2%), and wikis (47.4%) dominated software use. Laptops and smartphones were consistently rated as highly important by over 50.0% of respondents. Students also perceived strong technological support for collaboration (77.0% rated tools as highly important) and skill development (67.4% agreed technology connected learning to the real world). However, 42.6% of students reported poor perceived learning outcomes. Multivariable logistic regression revealed that age and year of study were the strongest predictors of poor outcomes, with younger students (AOR = 3.36; 95% CI: 1.39-8.12; p = 0.007) and third-year students (AOR = 27.73; 95% CI: 14.32-53.70; p < 0.001) having substantially increased odds compared to older and fifth-year students, respectively. The primary barriers were lack of steady electricity (20.0%), limited access to technology (19.7%), and poor internet connectivity (19.3%).
While pharmacy students actively use and value digital tools, significant infrastructural barriers and a disconnect between perceived support and actual reported learning outcomes exist. Targeted interventions addressing technological access, power instability, and curriculum support, particularly for intermediate-level students, are urgently needed to realize the potential of educational technology.
This study evaluated two implementations of a reaction-time paradigm to assess spectrotemporal modulation sensitivity in cochlear implant (CI) users, aiming to support both clinical and research applications. Reaction times directly reflect task difficulty, enabling rapid testing with stimuli presented well above modulation detection thresholds. Twenty unilateral CI users completed a task involving the unpredictable onset of broadband and narrowband spectrotemporal modulations embedded in noise. Testing was conducted using two implementations: an app on a smartphone with direct wireless streaming to the CI processor and touchscreen responses ("App"), and a free-field setup with laptop and spacebar responses ("Laptop"), administered 2 to 3 months apart. Speech-in-noise perception was assessed with a matrix test. Reaction times showed strong within-participant consistency across implementations, demonstrating robustness over time and across different delivery and response setups. Individual differences in sensitivity to spectral and temporal modulations were evident and showed strong correspondence between the two implementations. Reaction-time-based modulation transfer functions matched those reported in previous psychophysical studies. Notably, reaction times correlated most strongly (r = 0.6-0.7) with speech-in-noise scores for spectrotemporal modulations relevant to speech, particularly spectral densities of 0.25-0.5 cycles/octave combined with temporal rates up to 16 Hz. These findings support the use of reaction times to measure spectrotemporal sensitivity in CI users.
Constructing and studying pangenome variation graphs (PVGs) supports new insights into viral genomic diversity, because such pangenomes are less prone to the reference bias that affects mutation detection. Interpreting the information in these graphs is challenging, so automating these processes to allow exploratory investigations for PVG optimisation is essential. Moreover, existing methods do not scale well to the smaller genome sizes of viruses, nor do they facilitate analysis in laptop environments. To address this, we developed an easily deployable pipeline for the rapid creation of virus PVGs that applies a broad range of analyses to these PVGs. We present Panalyze, a computationally scalable virus PVG construction, analysis and annotation tool implemented in NextFlow and containerised in Docker. Panalyze uses NextFlow to efficiently complete tasks across multiple compute nodes and in diverse computing environments. Panalyze can also operate on a single thread on a standard laptop, and can analyse sequence lengths of any size. We illustrate how Panalyze works and the valuable outputs it can generate using a range of common viral pathogens. Panalyze is released under an MIT open-source license, available on GitHub with documentation accessible at https://github.com/downingtim/Panalyze/.
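The variation-graph data model that PVG tools build on can be illustrated with a toy example: nodes hold sequence segments, edges connect adjacent segments, and each genome is a path through shared and variant nodes. The sequences and node layout below are invented for illustration, not Panalyze output.

```python
# Toy pangenome variation graph: node 2 vs. node 3 encodes a SNP site
# shared between two viral haplotypes.
nodes = {1: "ACGT", 2: "A", 3: "G", 4: "TTCA"}
edges = {(1, 2), (1, 3), (2, 4), (3, 4)}
paths = {"virusA": [1, 2, 4], "virusB": [1, 3, 4]}

def spell(name):
    """Reconstruct a haplotype's linear sequence from its node path."""
    return "".join(nodes[n] for n in paths[name])

print(spell("virusA"))  # ACGTATTCA
print(spell("virusB"))  # ACGTGTTCA

# Nodes traversed by every path form the core genome; the rest are variable.
core = set.intersection(*(set(p) for p in paths.values()))
print(sorted(core))  # [1, 4]
```

Because variants live in the graph itself rather than being called against one chosen reference, both haplotypes are represented symmetrically, which is the sense in which PVGs reduce reference bias.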
Anatomical photographs are essential in medical education and research as they document fine details of human anatomy, which may support visualization of dissection material. This study investigated the feasibility of an artificial intelligence (AI)-based image enhancement system for anatomical dissection photographs and explored whether subtle visual differences could be detected under magnification. A dataset of 50 anatomical photographs taken between 2001 and 2024 with four different digital cameras was processed using Upscayl (v2.11.5) with the preset "16× REAL-ESRGAN." Processing was performed on a Casper Excalibur G770 laptop, requiring approximately 3-5 min per image. Original and enhanced images were compared at magnifications of 1×, 5×, 10×, 15×, and 20× on a 55-in. Full HD display. Forty experts, including neuroanatomists and neurosurgeons, qualitatively assessed the images with respect to anatomical accuracy, noise reduction, edge definition, and training value. The visual differences between the original and enhanced images were generally subtle. However, improvements in edge definition and noise reduction became more apparent in deep anatomical regions, such as ventricular cavities, particularly at higher magnification levels. High-resolution images showed limited observable differences, whereas lower-resolution images exhibited slightly more noticeable changes under magnification. The enhancement process did not introduce distortions of anatomical structures. A key limitation was the substantial increase in file size after enhancement. AI-based image enhancement appears feasible for anatomical dissection photographs and may provide modest visual benefits in selected settings, especially for older or lower-resolution images viewed at higher magnification. Further optimization is required to reduce file size and processing time before routine educational or publication use.
To assess responses, post-traumatic stress level, and awareness and preparedness among Thai dental students following the Sagaing earthquake, and to identify the factors associated with their traumatic stress level. A questionnaire survey was distributed via a Google Form in April 2025 to dental students enrolled at the dental school. The questionnaire consisted of four sections: (1) demographic information; (2) experiences and responses during the earthquake; (3) post-traumatic stress, assessed using the post-earthquake trauma level determination scale; and (4) earthquake awareness and preparedness, assessed using the sustainable scale of earthquake awareness, both using five-point Likert scales. Associations between trauma scores and related variables were analysed using the Wilcoxon rank-sum or Kruskal-Wallis test, and multivariable negative binomial regression. Of 921 students, 287 completed the questionnaire. Initial perceptions during the earthquake were mainly dizziness and fatigue. Immediate responses included drop-cover-hold and stairway evacuation, with most students first grabbing mobile phones, followed by bags and laptops/tablets. Reported reactions focused on concern for loved ones, anxiety about future quakes, and greater appreciation of life and relationships. Multivariable analysis showed that living on the 8th floor or higher was significantly associated with higher post-traumatic stress scores compared to living in houses or on lower floors. The earthquake caused low post-traumatic stress among Thai dental students, though stress was higher among high-rise residents. It increased appreciation of life and relationships. While the faculty response was effective, stronger city- and national-level disaster management is needed for future safety.
How do people discover an effective movement strategy when the environment abruptly changes, such as when using the trackpad on an unfamiliar laptop? Strategic adaptation is often described as a reinforcement learning process characterized by two key features: random exploration followed by gradual error reduction. We propose a different view in which strategic adaptation operates through hypothesis testing: learners generate specific action-outcome hypotheses about the environmental change, discount those that conflict with feedback, and continue testing alternatives until they discover the correct rule. To adjudicate between these accounts, we conducted two large-scale experiments using a visuomotor rotation task designed to isolate strategic adaptation under different target arrangements (N = 560). Individual learning trajectories showed pronounced exploration but were far from random, exhibiting structured, multimodal error distributions. Moreover, participants did not converge on the solution gradually; instead, they discovered it abruptly. Critically, strategic adaptation depended on target arrangement: some configurations steered participants toward the correct rotational hypothesis, whereas others led them to alternate between rotational and translational hypotheses. Together, these findings position hypothesis testing as a core mechanism governing strategic motor learning.
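The hypothesis-testing account can be sketched schematically: a learner holds candidate action-outcome hypotheses, discards each one the moment feedback contradicts it, and stops abruptly when one survives. This is an illustrative toy model, not the authors' implementation; the rotation value, candidate set, and tolerance are invented.

```python
ROTATION = 45.0   # true cursor rotation (deg), unknown to the learner
TOLERANCE = 5.0   # feedback within this margin confirms a hypothesis

def error(aim_offset):
    """Angular target error when aiming with a given compensatory offset."""
    return abs(aim_offset - ROTATION)

# Candidate action-outcome hypotheses (aiming offsets), tested in turn;
# each is abandoned as soon as feedback conflicts with it.
hypotheses = [0.0, -45.0, 90.0, 45.0, -90.0]

history = []
for h in hypotheses:
    e = error(h)
    history.append((h, e))
    if e <= TOLERANCE:  # hypothesis survives feedback: search ends abruptly
        break

solution = history[-1][0]
print([e for _, e in history])  # [45.0, 90.0, 45.0, 0.0] -- abrupt, not gradual
print(solution)                 # 45.0
```

Unlike a reinforcement learner that shrinks its error incrementally, this learner's error trace is multimodal and then drops to zero in a single step, which is the qualitative signature the experiments tested for.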
To assess the relationship between light-emitting diode device usage and premature ageing. The cross-sectional, descriptive study was conducted from October 2023 to May 2024 after approval from the ethics review committee of Pakistan Naval Ship Shifa Hospital, Karachi, and comprised individuals aged 27-40 years. Other than demographic characteristics, data was collected about light-emitting diode device usage and indicators of premature ageing based on self-reported and observed features. Data was analysed using SPSS 29. Of the 450 participants with mean age 32.4±3.7 years, 225 (50%) each were males and females. Commonly used devices were mobile phones 400 (88.9%), television 350 (77.8%) and laptops 300 (66.7%). Overall, 200 (44.4%) subjects reported 5-7 hours of screen time, and 300 (66.7%) did not use ultraviolet protection. Devices were used at a distance of 10-20 cm by 200 (44.4%) subjects. In terms of premature ageing signs, the most common was dark circles 325 (72.2%), while greying of hair was the least common 200 (44.4%). All ageing variables showed a highly significant association with light-emitting diode usage (p<0.01), with the exception of greying of hair, which demonstrated a significant association but at a lower level (p<0.05). There was a significant link between light-emitting diode device usage and premature ageing.
Adam is a 15-year-old boy who was born prematurely, with prenatal substance exposure, and was diagnosed in early childhood with combined-type attention-deficit hyperactivity disorder and oppositional defiant disorder. He was not found to meet criteria for fetal alcohol spectrum disorder. Despite treatment with stimulant medications and other adjunctive medications, Adam experienced ongoing difficulties with impulse control, sleep, and aggression. Adam was introduced to digital devices at an early age, resulting in unfiltered, poorly supervised, and prolonged screen exposure. Over time, Adam's digital use escalated into late-night gaming and engaging with social media platforms. Attempts by parents and other caregivers to apply parental controls were inconsistent because of family instability and ongoing caregiver substance use. Exposure to disturbing online content (including violence and conspiracy narratives) disrupted Adam's sleep and resulted in increased emotional lability, often triggering nightmares and severe irritability. The school also implemented restrictions on electronic device use (including phones and laptops). Related disciplinary consequences contributed to social stress and peer conflict. In addition, the patient disclosed a history of childhood sexual trauma, which occurred during unsupervised online interactions, further deepening his reliance on digital environments as a coping mechanism.
Subsequent identification and treatment of posttraumatic stress disorder helped to alleviate some of the associated distressing emotional symptoms for Adam but did not alter his compulsive use of digital technology. Finally, when consistent efforts were made to limit Adam's screen time, it provoked severe mood dysregulation, aggressive outbursts, and even suicidal ideation. How can clinicians effectively manage digital addiction in a neurodivergent adolescent when restricting device use provokes severe emotional dysregulation and suicidal ideation? What multimodal treatment strategies can balance behavioral containment with trauma-informed care? How can families and clinicians collaboratively establish digital boundaries that promote recovery without triggering psychological destabilization? What does this case reveal about the need for early screening, prevention, and family education regarding digital addiction in neurodivergent youth?
Metabolic control analysis is used to understand regulation of metabolism and identify bottlenecks to be overcome in metabolic engineering for desired products. Its application has been hampered by the need for either parameterized models or carefully titrated experiments. In this study, we use thermodynamically feasible, sampled parameters to overcome this limitation. We use metabolic control analysis to explore central carbon metabolism of Saccharomyces cerevisiae growing in continuous culture under different nutrient limitations. Furthermore, we demonstrate shifts in flux control patterns in response to the different growth conditions and show how our results for specific reactions agree with the literature. Key advantages of the proposed framework include the incorporation of allosteric effectors, the use of omics data from a single steady-state time point and the computational efficiency; in all cases, 100 feasible models were sampled in less than 20 min on a laptop. The model and framework are freely available for researchers to use on their own data: https://github.com/biosustain/GRASP.git.
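The sampling idea can be illustrated on the smallest possible case: a two-step pathway with one intermediate, where the flux control coefficients follow in closed form from the elasticities and must obey the summation theorem. The formulas below are standard metabolic control analysis results for this toy pathway, not the GRASP implementation; the elasticity ranges are illustrative assumptions.

```python
import random

def flux_control(eps1, eps2):
    """Flux control coefficients for a two-step pathway with one
    intermediate S: v1 produces S (elasticity eps1 <= 0, e.g. product
    inhibition) and v2 consumes S (elasticity eps2 > 0)."""
    c1 = eps2 / (eps2 - eps1)
    c2 = -eps1 / (eps2 - eps1)
    return c1, c2

random.seed(0)
# Sample plausible elasticities instead of fitting a single parameter set.
samples = [flux_control(random.uniform(-1.0, 0.0), random.uniform(0.1, 1.0))
           for _ in range(100)]

# Summation theorem: control coefficients sum to 1 in every sampled model.
assert all(abs(c1 + c2 - 1.0) < 1e-9 for c1, c2 in samples)

mean_c1 = sum(c1 for c1, _ in samples) / len(samples)
print(round(mean_c1, 2))  # average control held by the supply step
```

A distribution of control coefficients across sampled models, rather than one point estimate, is what lets the approach work without a fully parameterized kinetic model.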
As Alzheimer's disease and related dementias studies incorporate remote assessments, people at higher risk, such as individuals from minority racial and ethnic groups, may be disadvantaged due to imbalances in access. Using data from the National Alzheimer's Coordinating Center Uniform Data Set, logistic regression models using generalized estimating equations with a random effect for study site were used to test the association of race and ethnicity, education, and their interactions with technology preferences and internet access. A total of 3,803 participants across 17 Alzheimer's Disease Research Centers (ADRCs) were included (mean age 73.3 years [standard deviation (SD) = 9.9], mean education 16.4 years [SD = 2.7], 79% non-Hispanic White). The effect of education on internet access via desktop, laptop computer, and tablet was greater in non-White participants than in non-Hispanic White participants. A similar pattern was observed for interest in using devices for study visits. Education may have a role in racial and ethnic differences in technological access and preferences.
Apps and web browsers, whether on smartphones, tablets or laptops, have become the mainstream methods of capturing clinical outcome assessment (COA) data, and patient-reported outcome measures (PROMs) in particular, in clinical trials. There has long been concern around whether implementing traditionally paper-based questionnaires on electronic systems may impact the measurement properties of these carefully validated questionnaires, with migration best practices focusing predominantly on this issue. In parallel, app and web-design best practices outside of clinical research have focused on the accessibility and usability of these electronic systems for the widest range of users. This article evaluates how existing web accessibility success criteria compare to electronic PROM (ePROM) design best practices, identifying where there is alignment or tension, where further evidence is needed to ensure both accessibility and the maintenance of questionnaire measurement properties, and what accessibility practices can be incorporated into ePROM design best practices today. The online version contains supplementary material available at 10.1186/s41687-026-01039-8.
Computer vision syndrome (CVS) is an increasingly prevalent ocular health concern among medical students due to visual display terminal (VDT)-dependent learning methods. The purpose of this study was to assess the prevalence of CVS among medical students and the effectiveness of a 2-week ergonomic intervention in reducing its symptoms. Medical students between 17 and 25 years of age, using VDTs for ≥6 months with corrected visual acuity of 6/6, were included. CVS score was recorded using a validated questionnaire. Participants attended a 1-h ergonomic counseling session, followed by a 2-week intervention period during which adherence was tracked using an objective assessment card. On day 15, the pre-session CVS score was compared with the post-session score to evaluate the efficacy of ergonomic modifications. In the study of 114 participants (90 males and 24 females), cellphones (100%), laptops (46%), and tablets (40%) were the most common VDTs used. At baseline, the median CVS score was 4.00 (interquartile range [IQR]: 2.00-8.00), with 41.2% reporting CVS symptoms. Postintervention, CVS symptoms were reported in 18.4% of participants. The median CVS score decreased significantly to 2.00 (IQR: 1.00-5.00; Wilcoxon signed-rank test, Z = -6.525, P < 0.0001). Symptoms like burning, itching, and tearing improved significantly (P < 0.0001), whereas others, including headache and blurred vision, showed no significant improvement. CVS is an increasingly prevalent ocular pathology among medical students. Adopting correct ergonomic practices can be a crucial step in the management of many of its symptoms, underscoring the importance of sensitizing VDT users to correct ergonomic principles.
Previous evidence on the associations of dairy intake with risk of cardiometabolic diseases has been inconsistent, with studies showing inverse, null, or positive associations. We aimed to assess these associations in China, where dairy consumption is low and cardiometabolic disease patterns differ from those in the West. The China Kadoorie Biobank is a prospective cohort study with ∼512,000 adult participants recruited from 10 diverse localities in China during 2004-2008. At baseline and periodic resurveys, information on the consumption frequency of major food groups was collected using a validated interviewer-administered laptop-based questionnaire. During ∼5.4 million person-years of follow-up, 18,306 diabetes, 33,946 ischemic heart disease [IHD, including 3888 acute myocardial infarction (MI)], 33,670 ischemic stroke, and 7191 intracerebral hemorrhage (ICH) cases, as well as 13,241 cardiovascular deaths, were recorded. Cox regression was used to calculate adjusted hazard ratios (HRs) relating dairy intake to cardiometabolic disease risk. At baseline, 10.7% of participants regularly consumed (i.e., ≥4 d/wk) dairy products, whereas 70.0% reported never or rare consumption. After adjusting for potential confounders including body mass index, dairy consumption was significantly and positively associated with IHD but inversely associated with risks of acute MI, ICH, and cardiovascular death, with HRs for regular consumers compared with nonconsumers being 1.09 (95% CI: 1.06, 1.12), 0.88 (0.80, 0.98), 0.69 (0.62, 0.76), and 0.82 (0.77, 0.87), respectively, but not with diabetes or ischemic stroke. These associations were largely independent of systolic blood pressure. In Chinese adults, higher dairy consumption was associated with lower risks of acute MI, ICH, and cardiovascular death. Future studies are warranted to further elucidate these relationships and their causality.
Anchoring is a prominent judgment bias that causes people's estimates of uncertain quantities to assimilate towards recently encountered values. Here, we ask whether items can cause anchoring: will the question "Does a handheld flashlight torch cost more or less than a laptop?" induce anchoring in the same way as "Does a handheld flashlight torch cost more or less than £500?"? We present evidence from ten studies suggesting that it can, and that perceptions of the value of the anchor item (e.g., the laptop) are also anchored. In other words, estimates for both items being compared assimilate towards each other. We also find that low-value items are anchored more strongly than high-value items. Overall, there is evidence for a small anchoring-by-items effect (Hedges' g = 0.25), which we suggest previous studies may have been underpowered to detect. Among existing theories of anchoring, Selective Accessibility would appear to provide the best account of the data.
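The reported effect size, Hedges' g, is a bias-corrected standardized mean difference. A minimal sketch of its computation follows; the price estimates are invented for illustration and are not data from the studies.

```python
import statistics

def hedges_g(x, y):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups.
    sp = (((nx - 1) * statistics.variance(x) +
           (ny - 1) * statistics.variance(y)) / (nx + ny - 2)) ** 0.5
    d = (statistics.mean(x) - statistics.mean(y)) / sp  # Cohen's d
    j = 1 - 3 / (4 * (nx + ny - 2) - 1)  # small-sample bias correction
    return j * d

# Hypothetical price estimates (in £) with and without an item anchor.
anchored = [180, 210, 195, 205, 190, 200]
control = [170, 200, 185, 195, 180, 190]
print(round(hedges_g(anchored, control), 2))  # → 0.85
```

The correction factor j shrinks Cohen's d slightly, which matters most for the small samples where the anchoring-by-items effect would otherwise be easiest to overstate.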
Shallow recurrent decoders (SHRED) are effective for system identification and forecasting from sparse sensor measurements. Such models are lightweight and computationally efficient, allowing them to be trained on consumer laptops. SHRED-based models rely on recurrent neural networks (RNNs) and a simple multi-layer perceptron (MLP) for temporal encoding and spatial decoding, respectively. Despite the relatively simple structure of SHRED, such models are able to predict chaotic dynamical systems on different physical, spatial and temporal scales directly from a sparse set of sensor measurements. In this work, we modify SHRED into transformer-SHRED (T-SHRED), which embeds symbolic regression in the temporal encoding and circumvents auto-regressive long-term forecasting for physical data. This is achieved by incorporating a new sparse identification of nonlinear dynamics (SINDy) attention mechanism into T-SHRED, which imposes sparsity regularization on the latent space and allows for immediate symbolic interpretation. Symbolic regression improves model interpretability by learning and regularizing the dynamics of the latent space during training. We analyse the performance of T-SHRED on three different dynamical systems ranging from low-data to high-data regimes. This article is part of the discussion meeting issue 'Symbolic regression in the physical sciences'.
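At the core of the SINDy step is a sparse regression over a library of candidate terms. The sketch below shows plain sequentially thresholded least squares recovering a known one-dimensional system from synthetic data; it is a stand-alone illustration of that regression, not the T-SHRED attention mechanism, and the system and library are chosen for simplicity.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: the sparse regression
    at the heart of SINDy. Small coefficients are zeroed and the
    remaining terms are refit until the active set stabilizes."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Synthetic trajectory from dx/dt = -2x (a stand-in for a latent state).
t = np.linspace(0, 2, 200)
x = np.exp(-2 * t)
dxdt = -2 * x

# Candidate library of terms: [1, x, x^2].
theta = np.column_stack([np.ones_like(x), x, x**2])
xi = stlsq(theta, dxdt)
print(np.round(xi, 3))  # only the x coefficient survives: dx/dt = -2x
```

The thresholding is what turns a dense least-squares fit into an interpretable symbolic model; in T-SHRED the analogous sparsity penalty acts on the latent dynamics during training.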