CubeSats have emerged as an enabling technology for a new generation of space missions, offering relatively low cost and rapid development opportunities for scientific, educational, and technology demonstration purposes. The reliable operation of a CubeSat depends critically on its electrical power system (EPS), which serves as the primary energy backbone for all onboard subsystems and payloads. Several studies have indicated that the EPS is the subsystem most susceptible to failure in satellite missions, as it operates under tight power, volume, and reliability constraints while exposed to harsh and variable orbital conditions. In this context, this paper presents the design, hardware implementation, and experimental validation of an autonomous EPS architecture intended to enhance the operational lifetime of CubeSats. The proposed EPS autonomously manages energy to cope with solar irradiance variations in low Earth orbit (LEO). It combines maximum power point tracking (MPPT), a battery management system (BMS), regulated power distribution, and sensor-based telemetry under the supervision of a microcontroller unit (MCU). A real-time decision-making algorithm autonomously monitors the photovoltaic (PV) array voltage. The supervisory algorithm implements a three-mode graduated control strategy (normal, moderate, and power-down), governed by two discrete PV voltage thresholds, enabling more precise and graduated load management than the binary single-threshold schemes reported in prior work. When the nominal irradiance level is regained, the system autonomously returns to the desired operational mode without ground-station intervention. The modular design allows components to be upgraded easily without redeveloping the entire architecture. A detailed power budget analysis yields a total system load of 1,460 mW, divided between telemetry (22 mW), payload (1,400 mW), and communication (38 mW) subsystems. The deployed EPS uses a three-cell Li-ion battery pack (11.1 V, 1,800 mAh per cell) and PV panels (12 V, 2,100 mW). Experimental tests validate sensor data acquisition and, importantly, accurate mode transitions between normal and power-down modes under fluctuating irradiance conditions. Furthermore, dynamic experiments involving controlled variation of the PV voltage are conducted to evaluate both the degradation and recovery behavior of the system. The results demonstrate stable, repeatable, and threshold-consistent mode transitions under varying input power scenarios, and collectively show that autonomous mode transition and load management can be achieved using low-cost commercial off-the-shelf (COTS) components, making the proposed EPS a practical, reproducible, and scalable testbed for academic and small-mission CubeSat platforms.
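A minimal Python sketch of the two-threshold, three-mode supervisory logic described above; the threshold voltages and the per-mode load assignments are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical PV voltage thresholds (V); the paper defines two such thresholds
# but their numerical values are not given in the abstract.
V_MODERATE = 9.0     # below this -> moderate mode
V_POWERDOWN = 6.0    # below this -> power-down mode

NORMAL, MODERATE, POWER_DOWN = "normal", "moderate", "power_down"

def supervise(pv_voltage: float) -> str:
    """Map the sensed PV array voltage to an operating mode."""
    if pv_voltage >= V_MODERATE:
        return NORMAL          # full load: telemetry + payload + comms
    if pv_voltage >= V_POWERDOWN:
        return MODERATE        # shed the payload, keep telemetry + comms
    return POWER_DOWN          # keep only essential telemetry

# Example: a degradation-then-recovery sweep mirroring the dynamic
# fluctuating-irradiance experiments described in the abstract.
for v in [11.5, 9.8, 8.2, 5.9, 4.0, 7.1, 10.3]:
    print(f"PV = {v:5.1f} V -> {supervise(v)}")
```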
Early-stage detection of cancer is a key factor for successful treatment and improved survival. Yet, current screening approaches are often invasive, expensive, and limited to single biomarkers, which constrains their applicability at a global scale. Breath analysis offers a promising, non-invasive alternative capable of detecting multiple cancer biomarkers simultaneously, with potential to reduce diagnostic inequality across low- and high-income countries. Among available technologies, electronic noses (E-noses) have emerged as powerful platforms for detecting volatile organic compounds (VOCs) in exhaled breath. This review critically discusses advances in metal oxide (MOX)-based E-nose systems, highlighting the transition from individual sensors to integrated multisensor array chip (MSAC) architectures, advanced signal conditioning, and machine-learning (ML)-assisted data analysis pipelines. The working principle of ML-assisted MOX E-noses, including sensor array response acquisition, feature extraction, dimensionality reduction, and classification of complex VOC mixtures, is systematically analyzed, with particular attention to drift, cross-sensitivity, and real-world variability. Distinct from prior reviews, this work integrates a systematic analysis of cancer-related VOCs with recent breakthroughs (2025-2026) in hybrid sensing modalities, hardware-level drift mitigation, and clinical translation barriers. By bridging material-level innovations with system-level performance metrics and real-world deployment challenges, this work provides a critical framework for the development of reliable, scalable, and real-time E-nose technologies for multicancer diagnostics.
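A minimal sketch of the ML-assisted pipeline named above (sensor array response acquisition, feature extraction, dimensionality reduction, classification); the synthetic data and the PCA/SVM choices are illustrative assumptions rather than a pipeline from the reviewed literature:

```python
# Illustrative E-nose classification pipeline on synthetic stand-in data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 16                 # hypothetical 16-element MOX array
X = rng.normal(size=(n_samples, n_sensors))    # stand-in for extracted response features
y = rng.integers(0, 2, size=n_samples)         # 0 = control, 1 = cancer-related VOC profile

# Normalize -> reduce dimensionality -> classify, as in the surveyed pipelines.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```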
Cable-driven parallel robots (CDPRs) are attractive for large-space manipulation because of their lightweight structure, large workspace, and reconfigurability. However, existing systems still face three practical challenges: limited modularity of the mechanical architecture, repeated calibration after reconfiguration, and insufficient integration between visual perception and grasp execution. To address these issues, this paper presents a modular cable-driven parallel robot (MCDPR), together with its kinematic modeling, vision-based self-calibration, and visual grasping methods. First, a modular mechanical architecture is developed in which the drive, sensing, and cable-guiding functions are integrated to support rapid assembly/disassembly, convenient debugging, and cable anti-slack operation. Second, a multilayer kinematic model that accounts for pulley kinematics is established, and a vision-based self-calibration method is proposed to identify the structural parameters after assembly using onboard sensing and AprilTag observations, thereby reducing the number of recalibrations required after reconfiguration. Third, a vision-guided bin-picking method is developed by combining RGB-D perception, coordinate transformation, and the calibrated robot model. A combined software/hardware validation framework is established, in which CoppeliaSim-based simulation and a hardware prototype are used together to verify the proposed design and methods. In simulation, self-calibration reduces the Euclidean grasping position error from 0.371 mm to 0.048 mm and the orientation error from 0.071° to 0.004°. In experiments, the relative position error is reduced by 58.33% after self-calibration.
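A minimal sketch of vision-based structural-parameter identification in the spirit described above, where a cable-anchor position is refined by least squares from AprilTag-observed platform positions and measured cable lengths; the toy point-anchor model (no pulley geometry) is an assumption far simpler than the paper's multilayer kinematic model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
anchor_true = np.array([2.0, 3.0, 2.5])            # unknown structural parameter
p_obs = rng.uniform(0.5, 1.5, size=(20, 3))        # AprilTag-observed platform positions
l_meas = np.linalg.norm(p_obs - anchor_true, axis=1) + rng.normal(0, 1e-4, 20)

def residual(anchor):
    # Mismatch between model-predicted and measured cable lengths.
    return np.linalg.norm(p_obs - anchor, axis=1) - l_meas

fit = least_squares(residual, x0=np.array([1.8, 2.8, 2.2]))
print("identified anchor:", fit.x)                 # converges close to anchor_true
```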
Singular optics is a branch of modern electromagnetics and optics that investigates solutions to Maxwell's equations exhibiting nontrivial topological features under various boundary conditions. These solutions give rise to light fields containing singularities: points or regions at which certain optical properties, such as phase or polarization, become undefined. Over time, singular optics has evolved into a unifying framework for understanding and engineering optical fields that possess phase, polarization, coherence, and spatiotemporal singularities, each characterized by quantized topological properties. Such structured light fields enable high-dimensional information encoding, robust light-matter interactions, and sensitive probing of complex media, thereby impacting optical communication, imaging, sensing, and materials processing. Parallel advances in theory, fabrication techniques, detection hardware, and computational methods have created a diverse and rapidly expanding landscape, underscoring the need for an integrated and forward-looking perspective. This roadmap synthesizes emerging applications of singular optics across multiple platforms, highlighting key physical concepts, new architectures, and transformative technologies that bridge subfields. In addition to reflecting the insights of leading contributors to these research directions, it surveys selected recent advances, providing a concise overview of current trends and a foundation for shaping the future of singular optics and its applications.
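To make the notion of a phase singularity concrete: a minimal sketch of an optical vortex field whose phase winds by 2πl around a point of vanishing amplitude, where the phase is undefined; the topological charge l is chosen arbitrarily for illustration:

```python
import numpy as np

l = 2                                         # topological charge (assumed for illustration)
x = y = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, y)
field = (X + 1j * Y) ** l                     # simplest vortex: amplitude ~ r^l, phase = l*phi

# Verify the quantized winding by accumulating wrapped phase steps around a
# closed loop encircling the singularity at the origin.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
loop_phase = np.angle((np.cos(t) + 1j * np.sin(t)) ** l)
winding = np.round(np.sum(np.angle(np.exp(1j * np.diff(loop_phase)))) / (2 * np.pi))
print("winding number:", int(winding))        # -> 2, i.e. equal to l
```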
As global interconnectivity continues to intensify across digital and physical infrastructures, the pursuit of sophisticated hardware-level security mechanisms that seamlessly intertwine these domains has become increasingly vital. Physically unclonable functions (PUFs) have emerged as intrinsic identifiers that exploit unavoidable physical variations to ensure authenticity and tamper resistance. Early generations of PUFs, implemented through single-mode architectures such as electrical or optical configurations, demonstrated the foundational potential of device-intrinsic randomness for secure authentication. Electrical PUFs capitalize on stochastic charge transport and interface disorder, while optical PUFs harness complex light-matter interactions to achieve high entropy and physical uniqueness. Building upon these single-domain systems, recent advances have driven the evolution toward multidimensional and reconfigurable PUFs, integrating multiple transduction pathways and tunable material responses. Such hybrid architectures expand the challenge-response landscape, enhance adaptability under varying conditions, and enable programmable security characteristics. This review traces the progression from conventional single-domain PUFs to emerging multidimensional systems, highlighting advances in materials, device integration, and adaptive design. Finally, we discuss persisting limitations and outline prospects for developing intelligent, scalable, and resilient PUF platforms for next-generation cyber-physical networks.
Over the past decade, the field of musical haptics for the listener, which concerns the creation and evaluation of systems conceived for conveying or augmenting music signals through the sense of touch, has received increasing attention. The primary purposes of these systems are to enrich the musical experience of hearing listeners and to enhance music perception for listeners with hearing loss. This paper (Part I of a two-part review) surveys the state of the art in musical haptics systems for listeners, with a focus on technologies, system architectures, and design strategies. We introduce a taxonomy that classifies existing systems according to their application domains, accessibility focus, actuator technologies, haptic rendering approaches, form factors, and deployment contexts, and we apply it to a systematically collected body of literature. Building on this analysis, we describe the core components of musical haptics systems, examine trends in wearability and connectivity, and discuss the needed shift from laboratory prototypes toward live and ecologically valid scenarios. We further discuss technical challenges related to offline and real-time processing, latency, synchronization, and the deployment of machine-learning techniques on embedded hardware. Part II of this review will complement the technical perspective presented here with a survey of the perceptual, emotional, and behavioral effects of musical haptics systems on both hearing and Deaf and Hard-of-Hearing users.
We present an experimentally validated measurement-domain steganography framework based on orthogonal ghost imaging (OGI). In the proposed scheme, a compact single-pixel OGI front-end with orthogonal illumination patterns generates the cover measurements, while a hash-initialized logistic map drives a chaotic keystream that is embedded in the least significant bits (LSBs) of the bucket intensities. This integration into a single optical-digital pipeline avoids holographic components, deep-learning-based reconstruction, and multi-channel architectures, and keeps both the hardware and computational complexity moderate enough for resource-constrained optical platforms. A combination of numerical simulations and single-pixel optical experiments shows that the framework supports imperceptible embedding with stable statistical behavior. Under the reported operating conditions, the stego ghost images retain PSNR values above 50 dB, SSIM exceeding 0.996, and NPCR close to 99.98%, while intensity histograms, pixel correlations, higher-order moments, and chi-square statistics of the LSB plane remain almost unchanged. These results indicate that chaos-encrypted, bucket-domain LSB modulation can be effectively hidden within the speckle-like statistics and correlation-based reconstruction of OGI, allowing reliable message recovery without visible artifacts while exhibiting statistical consistency with basic steganalysis tests in the OGI domain. Overall, the proposed approach provides a proof-of-concept route to measurement-domain information hiding with hardware-generated covers. Owing to the use of a compact single-pixel architecture and lightweight digital processing, the framework suggests potential relevance to optical scenarios in which system complexity and processing resources are constrained. These considerations are based on architectural simplicity rather than measured power, volume, or embedded runtime metrics. A full system-level implementation and experimental validation on specific embedded or mobile platforms remain beyond the scope of this work and are left for future investigation.
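A minimal sketch of the bucket-domain hiding scheme described above: a SHA-256 hash of a key seeds a logistic map, and the resulting chaotic keystream encrypts the message bits before they are written into the LSBs of the quantized bucket intensities; the key, map parameter, and bucket quantization are illustrative assumptions:

```python
import hashlib
import numpy as np

def logistic_keystream(key: bytes, n: int, r: float = 3.99) -> np.ndarray:
    # Hash-initialized state in (0, 1), then iterate the logistic map.
    x = int.from_bytes(hashlib.sha256(key).digest(), "big") / 2**256
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

rng = np.random.default_rng(7)
buckets = rng.integers(0, 2**16, size=64, dtype=np.uint16)   # stand-in bucket intensities
message = rng.integers(0, 2, size=64, dtype=np.uint8)

ks = logistic_keystream(b"shared-key", message.size)
stego = (buckets & ~np.uint16(1)) | (message ^ ks).astype(np.uint16)  # LSB embedding

recovered = (stego & 1).astype(np.uint8) ^ ks                # extraction + decryption
assert np.array_equal(recovered, message)
```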
The paper introduces a computationally efficient semantic-aware route planning framework for indoor mobile robots, designed for real-time execution on resource-constrained edge hardware (Raspberry Pi 5, CPU-only). The proposed architecture fuses monocular object detection with 2D LiDAR-based range estimation and integrates the resulting semantic annotations into the Nav2 Route Server for penalty-weighted route selection. Object localization in the map frame is achieved through the Angular Sector Fusion (ASF) pipeline, a deterministic geometric method requiring no parameter tuning. The ASF projects YOLO bounding boxes onto LiDAR angular sectors and estimates the object range using a 25th-percentile distance statistic, providing robustness to sparse returns and partial occlusions. All intrinsic and extrinsic sensor parameters are resolved at runtime via ROS 2 topic introspection and the URDF transform tree, enabling platform-agnostic deployment. Detected entities are classified according to mobility semantics (dynamic, static, and minor) and persistently encoded in a GeoJSON-based semantic map, with these annotations subsequently propagated to navigation graph edges as additive penalties and velocity constraints. Route computation is performed by the Nav2 Route Server through the minimization of a composite cost functional combining geometric path length with semantic penalties. A reactive replanning module monitors semantic cost updates during execution and triggers route invalidation and re-computation when threshold violations occur. Experimental evaluation over 115 navigation segments (legs) on three heterogeneous robotic platforms (two single-board RPi5 configurations and one dual-board setup with inference offloading) yielded an overall success rate of 97% (baseline: 100%, adaptive: 94%), with 42 replanning events observed in 57% of adaptive trials. Navigation time distributions exhibited statistically significant departures from normality (Shapiro-Wilk, p < 0.005). While central tendency differences between the baseline and adaptive modes were not significant (Mann-Whitney U, p = 0.157), the adaptive planner reduced temporal variance substantially (σ = 11.0 s vs. 31.1 s; Levene's test W = 3.14, p = 0.082), primarily by mitigating AMCL recovery-induced outliers. On-device YOLO26n inference, executed via the NCNN backend, achieved 5.5 ± 0.7 FPS (167 ± 21 ms latency), and distributed inference reduced the average system CPU load from 85% to 48%. The study further reports deployment-level observations relevant to the Nav2 ecosystem, including GeoJSON metadata persistence constraints, graph discontinuity ("path-gap") artifacts, and practical Route Server configuration patterns for semantic cost integration.
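A minimal sketch of the ASF range estimate described above (a YOLO bounding box projected to a LiDAR angular sector, with the object range taken as the 25th percentile of in-sector returns); the scan layout and the bearing limits of the detection are illustrative assumptions:

```python
import numpy as np

def sector_range(scan: np.ndarray, angle_min: float, angle_inc: float,
                 bbox_ang_min: float, bbox_ang_max: float) -> float:
    """scan: per-beam ranges (m); bbox_ang_*: bearing limits of the detection (rad)."""
    angles = angle_min + angle_inc * np.arange(scan.size)
    in_sector = (angles >= bbox_ang_min) & (angles <= bbox_ang_max)
    ranges = scan[in_sector]
    ranges = ranges[np.isfinite(ranges)]          # drop dropouts / max-range returns
    if ranges.size == 0:
        return float("nan")
    return float(np.percentile(ranges, 25))       # robust to sparse, occluded returns

# Example: clutter behind the object; the 25th-percentile statistic latches
# onto the near (object) returns rather than the background.
scan = np.full(360, 6.0); scan[170:190] = [2.1] * 8 + [5.9] * 12
print(sector_range(scan, -np.pi, np.pi / 180, -0.12, 0.12))   # -> 2.1
```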
This paper addresses the critical communication barrier experienced by deaf and hearing-impaired individuals in the Arab world through the development of an affordable, video-based Arabic Sign Language (ArSL) recognition system. Designed for broad accessibility, the system eliminates specialized hardware by leveraging standard mobile or laptop cameras. Our methodology employs Mediapipe for real-time extraction of hand, face, and pose landmarks from video streams. These anatomical features are then processed by a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), specifically Bidirectional Long Short-Term Memory (BiLSTM) layers. The CNN component captures spatial features, such as intricate hand shapes and body movements, within individual frames. Concurrently, BiLSTMs model long-term temporal dependencies and motion trajectories across consecutive frames. This integrated CNN-BiLSTM architecture is critical for generating a comprehensive spatiotemporal representation, enabling accurate differentiation of complex signs whose meaning relies on both static gestures and dynamic transitions, thereby avoiding misclassifications that CNN-only or RNN-only models are prone to. Rigorously evaluated on the author-created JUST-SL dataset and the publicly available KArSL dataset, the system achieved 96% overall accuracy on JUST-SL and 99% on KArSL. These results demonstrate the system's superior accuracy compared to previous research, particularly for recognizing full Arabic words, thereby significantly enhancing communication accessibility for the deaf and hearing-impaired community.
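A minimal sketch of the hybrid CNN-BiLSTM structure described above, with a per-frame convolutional encoder over Mediapipe landmark vectors feeding a bidirectional LSTM; the layer sizes, landmark count, and class count are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, n_landmarks=543, n_coords=3, n_classes=80):
        super().__init__()
        # Per-frame spatial encoder: 1D convolution over the landmark axis.
        self.frame_encoder = nn.Sequential(
            nn.Conv1d(n_coords, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten())           # -> 32*16 = 512 per frame
        # Temporal model: bidirectional LSTM across consecutive frames.
        self.bilstm = nn.LSTM(512, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x):                  # x: (batch, frames, landmarks, coords)
        b, t, l, c = x.shape
        f = self.frame_encoder(x.reshape(b * t, l, c).transpose(1, 2))  # (b*t, 512)
        seq, _ = self.bilstm(f.reshape(b, t, -1))
        return self.head(seq[:, -1])       # classify from the final temporal state

logits = CnnBiLstm()(torch.randn(2, 30, 543, 3))   # 2 clips, 30 frames each
print(logits.shape)                                # torch.Size([2, 80])
```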
The increasing energy and bandwidth demands of modern AI workloads highlight the need for hardware that mitigates the data-movement bottleneck of von Neumann architectures. Compute-in-memory and neuromorphic systems offer a compelling solution, yet reliable multi-tier 3D integration of analog synaptic devices remains challenging. Here, we report a monolithic 3D (M3D) integration platform that vertically stacks In-Ga-Zn-O (IGZO) access transistors and Hf0.5Zr0.5O2 (HZO)-based ferroelectric transistors to realize compact, energy-efficient neuromorphic hardware. Two-tier and four-tier IGZO/ferroelectric field-effect transistor (FeFET) architectures were fabricated with excellent structural integrity, uniform elemental profiles, and preserved orthorhombic HZO ferroelectricity across all tiers. The devices exhibit reproducible switching, >10-year retention, endurance up to 10¹¹ cycles, and stable multilevel conductance states suitable for synaptic computing. Mapping device characteristics to a convolutional neural network (CNN) for Canadian Institute for Advanced Research (CIFAR-10) inference yields 95.0% (tier-2) and 95.5% (tier-4) accuracy, approaching the 96.1% software baseline. Analog-domain convolution was further demonstrated by encoding kernel weights into FeFET conductance states for edge-aware image processing. These results establish M3D-integrated FeFETs as a scalable and reliable platform for next-generation compute-in-memory and neuromorphic vision applications.
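A minimal sketch of the analog-domain convolution idea described above: kernel weights are snapped onto a small set of evenly spaced multilevel conductance states before an edge-aware convolution; the level count and the Sobel kernel are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def to_conductance_states(kernel: np.ndarray, n_states: int) -> np.ndarray:
    """Snap each weight to the nearest of n_states evenly spaced levels."""
    levels = np.linspace(kernel.min(), kernel.max(), n_states)
    idx = np.abs(kernel[..., None] - levels).argmin(axis=-1)
    return levels[idx]

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
g = to_conductance_states(sobel_x, n_states=9)    # 9 levels: weights map exactly;
                                                  # fewer states add quantization error
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0   # toy image: a bright square
edges = convolve2d(img, g, mode="same")
print(np.abs(edges).max())                        # strongest response on vertical edges
```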
Spiking Neural Networks (SNNs) executed on neuromorphic hardware promise energy-efficient, low-latency inference well suited to edge deployment in size-, weight-, and power-constrained environments such as autonomous vehicles, wearable devices, and unmanned aerial platforms. However, a coherent research pathway to deployment of neuromorphic devices remains elusive. This paper presents a structured review and position on the state of SNN-based vision across four interconnected dimensions: network architectures, training methodologies, event-based datasets and simulation techniques, and neuromorphic computing hardware. We survey the evolution from shallow convolutional SNNs to spiking Transformers and hybrid designs which leverage the advantages of SNNs and conventional artificial neural networks. We also examine surrogate gradient training and ANN-to-SNN conversion approaches, catalogue real-world and simulated event-based datasets, and assess the landscape of neuromorphic platforms ranging from rigid mixed-signal architectures to fully configurable digital systems. Our analysis reveals that while each area has matured considerably in isolation, critical integration challenges persist. In particular, event-based datasets remain scarce and lack standardisation, training methodologies introduce systematic gaps relative to deployment hardware, and access to neuromorphic platforms is restricted by proprietary toolchains and limited development kit availability. We conclude that bridging these integration gaps, rather than advancing individual components alone, represents the most important and least addressed work required to realise the potential of SNN-based vision at the edge.
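A minimal sketch of the surrogate-gradient approach surveyed above: a hard spike threshold on the forward pass is paired with a smooth fast-sigmoid derivative on the backward pass, so a leaky integrate-and-fire (LIF) layer trains with backpropagation; all constants are illustrative assumptions:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                        # hard threshold, forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # fast-sigmoid surrogate derivative

def lif_step(x, v, beta=0.9, threshold=1.0):
    """One LIF update: leaky integration, spike, soft reset."""
    v = beta * v + x
    spike = SurrogateSpike.apply(v - threshold)
    return spike, v - spike * threshold

x = torch.randn(8, 100, requires_grad=True)       # batch of input currents
v = torch.zeros(8, 100)
spikes = []
for _ in range(20):                               # 20 simulated time steps
    s, v = lif_step(x, v)
    spikes.append(s)
torch.stack(spikes).mean().backward()             # gradients flow through the surrogate
print(x.grad.abs().mean())
```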
The digitization of biological specimens has revolutionized morphology, generating massive 3D datasets such as microCT scans. While open-source platforms like 3D Slicer and SlicerMorph have democratized access to advanced visualization and analysis software, a significant "compute gap" persists. Processing high-resolution 3D data requires high-end GPUs and substantial RAM, resources that are frequently unavailable at Primarily Undergraduate Institutions (PUIs) and other educational settings. This "digital divide" prevents many researchers and students from utilizing the very data and software that have been made open to them. We present MorphoCloud, a platform designed to bridge this hardware barrier by providing on-demand, research-grade computing environments via a web browser. MorphoCloud utilizes an "IssuesOps" architecture, where users manage their remote workstations entirely through GitHub Issues using natural-language commands (e.g., /create, /unshelve). The technology stack leverages GitHub Issues and Actions for the front end and orchestration, respectively, JetStream2 for backend compute, and Apache Guacamole to deliver a high-performance, GPU-accelerated desktop experience to any modern browser. The platform enables a streamlined lifecycle for remote instances, which come pre-configured with the SlicerMorph ecosystem, R/RStudio, and AI-assisted segmentation tools like NNInteractive and MEMOs. Users have access to a persistent storage volume that is decoupled from the instance. For educational purposes, MorphoCloud supports "Workshop" instances that allow bulk provisioning and stay online continuously for short-term events. These identical environments ensure that instructors can conduct complex 3D workflows without the typical troubleshooting delays caused by heterogeneous student hardware. MorphoCloud demonstrates that true scientific accessibility requires not just open data and software, but also open infrastructure. By abstracting the complexities of cloud administration into a simple, command-driven interface, MorphoCloud empowers researchers at under-resourced institutions to engage in high-performance morphological analysis and AI-assisted segmentation.
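A minimal sketch of the IssuesOps command pattern described above, parsing a slash command from an issue comment and dispatching it to a handler; the handler bodies are hypothetical stand-ins, and the real platform runs such logic inside GitHub Actions:

```python
from typing import Callable

# Hypothetical handlers keyed by the slash commands named in the abstract.
HANDLERS: dict[str, Callable[[int], str]] = {
    "/create": lambda issue: f"provisioning instance for issue #{issue}",
    "/unshelve": lambda issue: f"restoring shelved instance for issue #{issue}",
}

def dispatch(comment_body: str, issue_number: int) -> str:
    # The command is the first token of the comment's first line.
    command = comment_body.strip().splitlines()[0].split()[0]
    handler = HANDLERS.get(command)
    if handler is None:
        return f"unrecognized command {command!r}"
    return handler(issue_number)

print(dispatch("/create", 42))
print(dispatch("/unshelve\nplease restore my volume", 42))
```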
Deep learning models for medical image analysis often rely on large-scale parameterization, which may limit their practical use in resource-constrained settings. This study aims to design a structurally compact multi-source framework capable of delivering competitive diagnostic performance with reduced computational overhead. We propose ML-ConvNet, a lightweight architecture comprising approximately 4.2 K parameters and 924 M FLOPs at 512×512 input resolution. The network incorporates Multi-Branch Re-parameterized Convolutions for scale-aware feature extraction, Hierarchical Dual-Path Attention for feature localization, Feature Self-Transformation for cross-feature interaction, and a Local Variance Weighted optimization strategy to address class imbalance. The framework is evaluated independently on three publicly available benchmark datasets representing heterogeneous imaging modalities: brain MRI, lung CT, and chest X-ray. Ablation studies, precision-recall analysis, cross-modality validation, and computational benchmarking are conducted to assess performance, stability, and efficiency under controlled experimental conditions. Within the evaluated settings, results indicate competitive diagnostic accuracy relative to established lightweight baselines, including EfficientNet and MobileNet variants, while substantially reducing parameter count. Class-wise F1-scores and PR-AUC values suggest relatively stable minority-class performance under repeated cross-validation sampling. Attention visualizations show activations concentrated over regions broadly associated with pathological findings, though these observations are qualitative in nature. Inference latency measurements on CPU and mobile hardware suggest feasibility for low-latency deployment under the tested single-image batch configurations, though real-world throughput may differ depending on hardware and operational conditions. These findings suggest that careful architectural design and domain-informed inductive biases may support competitive medical image classification on public benchmark datasets without extensive parameter scaling. The framework was evaluated exclusively under controlled conditions on publicly available data, and multi-institutional external validation is required before conclusions regarding generalizability or clinical applicability can be drawn.
Hafnium-based ferroelectric (Hf-FE) materials overcome the limitations of perovskite ferroelectric materials. Owing to their compatibility with the complementary metal-oxide-semiconductor process and their high scalability, Hf-FE devices have driven the development of non-volatile memory and neuromorphic computing, demonstrating significant potential for application in image processing and in-memory logic operations. This paper first summarizes the material systems, device structures, and physical mechanisms relevant to Hf-FE devices. Then, with a view toward efficient neuromorphic computing, the evaluation parameters of Hf-FE devices, specifically those concerning storage performance and synaptic plasticity, are discussed. Furthermore, the progress of Hf-FE devices in arrays and hardware integration is systematically reviewed, offering insights for future applications. Finally, the prospects and challenges of these devices in advanced applications are explored in depth, providing valuable guidance for the development of high-performance neuromorphic computing devices.
Expressive piano performance poses extreme challenges for robotic manipulation, necessitating high-speed repetitive impacts, substantial force output, and coordinated multi-joint control under stringent dynamic constraints. However, existing robotic systems exhibit significant limitations in replicating human-level dexterity, motion speed, and force output. This work presents a data-driven, bio-inspired dexterous robotic hand designed specifically for high-fidelity piano performance. We first extract kinematic primitives and stable inter-joint coupling patterns from large-scale motion capture data of professional pianists. These human motion priors are directly embedded into the mechanical architecture through morphological coupling and actuator allocation. Actuator selection is further guided by empirically measured human peak velocities and force profiles from the biomechanics literature, ensuring sufficient bandwidth for high-speed repetitive motion and adequate force transmission. Experimental results demonstrate that the proposed hand replicates human-like joint coordination, achieves peak joint velocities of 53.88 rad/s, and provides sufficient fingertip force for authentic piano interaction. As a demonstration of its capabilities, the hand successfully performs a Grade 7 piano piece, Croatian Rhapsody, illustrating its potential for expressive musical performance. This research establishes a principled pathway from human motion statistics to embodied robotic intelligence, providing a high-performance hardware foundation for autonomous musical performance.
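A minimal sketch of extracting inter-joint coupling patterns from motion-capture joint angles, here via PCA (through the SVD) on synthetic data; PCA is assumed as the primitive extractor and stands in for the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_joints = 5000, 20
t = np.linspace(0, 60, n_frames)

# Synthetic "pianist" data: two latent coordination patterns drive all joints.
patterns = rng.normal(size=(2, n_joints))
angles = (np.stack([np.sin(4 * t), np.sin(9 * t)], axis=1) @ patterns
          + 0.05 * rng.normal(size=(n_frames, n_joints)))

centered = angles - angles.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])        # ~two components capture nearly all variance
print(vt[0])                # first coupling pattern across the 20 joints
```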
Growing interest in extended reality and artificial intelligence (AI) over the past decade has led to the development of a variety of powerful methods with considerable potential for blended learning. However, with great power comes great responsibility. To date, blended learning tools still struggle to incorporate practical regulations and to preserve natural human presence in the use of AI systems (AIS). This article introduces Open Auditorium (OA), an extended reality blended learning (XR-BL) tool with human-centric ethical conditions of use (COU). Building on conceptual guidelines from previous studies, OA requires educators and learners to be natural human actors, while AIS assume vital yet strictly supportive roles. Furthermore, OA combines a diverse range of input modalities and interactions from pedagogy, extended reality, and AI with blended learning. Designed for researchers, educators, and learners, OA aims to make research, teaching, and learning more interactive and expressive. The software architecture of OA is based on the Robot Operating System (ROS) and is open source, providing a modular tool that encourages further community-driven research and development. This technology and code article summarizes the key design steps, essential features, hardware and software architecture, and implementation of the tool, including a comparison between selected ethical and unethical use cases. Ultimately, this work could serve as a reference for the design and implementation of future similar tools and help decision-makers facilitate a responsible use of AIS in education.
The rapid growth of artificial intelligence, ubiquitous sensing, and edge computing is exposing fundamental limitations of conventional von Neumann architectures, in which the physical separation of sensing, memory, and computation leads to excessive data movement, high energy consumption, and latency. As transistor scaling slows in the post-Moore era, architectural innovation has become essential to sustain progress in intelligent systems. In-sensor-memory computing (ISMC) addresses these challenges by co-locating perception, storage, and computation within unified device and system architectures, enabling in situ signal processing, mixed-signal computation, and event-driven intelligence at the data source. Recent advances in memristive and ferroelectric devices, low-dimensional and multifunctional materials, three-dimensional heterogeneous integration, and neuromorphic architectures have significantly expanded the functional scope of ISMC platforms. In parallel, the co-evolution of algorithms, including spiking neural networks, reservoir computing, and neuromorphic compilers, has facilitated the translation of device-level advantages into system-level performance. This perspective surveys the technological foundations, architectural trends, and emerging applications of ISMC, examines global industry-academia-research (IAR) collaboration, and outlines key challenges related to variability, reliability, scalability, and benchmarking. Collectively, ISMC is positioned as a post-von Neumann hardware paradigm for energy-efficient, distributed intelligence.
Falls are a leading cause of injury and mortality among older adults, motivating growing interest in video-based computer vision (CV) fall detection systems. This study presents a systematic mapping of vision-based fall detection in video, synthesizing evidence from 433 primary studies published through 2025 and retrieved from five databases using explicit eligibility criteria and independent dual screening within a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-inspired workflow. We characterize the field through (i) a structured three-level taxonomy grouping approaches into Feature Engineering, Deep Learning, and hybrid models; (ii) a quantitative analysis of commonly used algorithmic components and their reported performance across datasets and metrics; and (iii) a dedicated assessment of efficiency and deployment evidence (e.g., frames per second (FPS), latency, and hardware/platform reporting). Our findings indicate the predominance of Deep Learning pipelines, particularly convolutional neural network (CNN)-based backbones, together with a sustained prevalence of hybrid designs, while Transformer/Attention architectures show accelerated adoption in recent years. Despite frequent real-time claims, efficiency metrics and hardware specifications remain inconsistently reported, limiting reproducibility and clinical translatability. Overall, this mapping consolidates trends, benchmarks, and reporting practices, and identifies research gaps that hinder reproducibility and clinical translation.
The reconstruction of complex orbitomaxillary defects requires biomaterials that can simultaneously provide structural stability, biocompatibility, and accurate restoration of facial volume and contour. While rigid polymers such as polyetheretherketone (PEEK) offer reliable mechanical support, they do not adequately replicate the viscoelastic behavior of soft tissues. This report presents a translational revision case employing a personalized hybrid biomaterial approach that combines a 3D-printed PEEK implant for structural orbital floor support with a patient-specific polydimethylsiloxane (PDMS) implant for malar volumetric augmentation. Reconstruction was planned using CT segmentation and contralateral mirroring, and the patient-specific implants were subsequently designed using CAD/CAM techniques, pairing the rigid PEEK implant with the flexible PDMS implant to exploit their complementary mechanical properties. Revision surgery included the removal of inadequately positioned titanium hardware, the release of incarcerated extraocular muscles, and the restoration of orbital anatomy and facial symmetry. Postoperative imaging demonstrated stable implant positioning and sustained orbitomaxillary stability. Despite successful anatomical reconstruction, residual functional sequelae, including strabismus related to the severity of the initial orbital trauma, persisted and were addressed separately in a staged manner, resulting in satisfactory ocular alignment and resolution of diplopia in primary gaze. This case underscores the complementary functional roles of rigid and elastic polymers and highlights the translational potential of PDMS as a permanent, patient-specific implant material for volumetric and contour restoration in craniofacial reconstruction.
Exponential growth in global computing demand is compounded by the high energy requirements of conventional architectures, which are dominated by costly data movement. In-memory computing with Resistive Random Access Memory (RRAM) addresses this challenge by co-integrating memory and processing, but it faces substantial hurdles related to device-level non-idealities and scales poorly to large computing tasks. Here, we introduce MELISO+ (In-Memory Linear Solver), a full-stack, distributed framework for energy-efficient in-memory computing. MELISO+ proposes a novel two-tier error-correction mechanism to mitigate device non-idealities and develops a distributed RRAM computing framework that enables matrix computations exceeding dimensions of 65,000 × 65,000. This approach reduces first- and second-order arithmetic errors due to device non-idealities by over 90%, enhances energy efficiency by three to five orders of magnitude, and decreases latency 100-fold. Hence, MELISO+ allows lower-precision RRAM devices to outperform high-precision alternatives in accuracy, energy, and latency metrics. By unifying algorithm-hardware co-design with a scalable architecture, MELISO+ considerably advances sustainable, high-dimensional computing suitable for applications such as large language models and generative artificial intelligence.
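A minimal sketch of digitally correcting a noisy analog matrix-vector engine through residual iteration; this is one plausible reading of tiered error correction for in-memory linear solving, not the paper's exact two-tier mechanism, and the device noise model is an assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
B = rng.normal(size=(n, n)) / np.sqrt(n)
A = B @ B.T + np.eye(n)                  # well-conditioned SPD demo system
x_true = rng.normal(size=n)
b = A @ x_true

def analog_matvec(M, v):
    """RRAM-style matvec with 2% multiplicative device non-ideality."""
    return (M * (1 + 0.02 * rng.normal(size=M.shape))) @ v

# Richardson-style refinement: the noisy analog engine does the heavy matvec,
# while a cheap digital loop corrects it through the residual.
x = np.zeros(n)
for _ in range(60):
    r = b - analog_matvec(A, x)          # residual via the noisy analog engine
    x = x + 0.3 * r                      # damped digital correction step

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # settles near the noise floor
```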