In the dynamic landscape of digital forensics, the integration of Artificial Intelligence (AI) and Machine Learning (ML) stands as a transformative technology, poised to amplify the efficiency and precision of digital forensics investigations. However, the use of ML and AI in digital forensics is still in its nascent stages. This paper therefore offers an in-depth analysis, going beyond a simple survey and review, of how AI and ML techniques are used in digital forensics and incident response. It explores cutting-edge research initiatives spanning data collection and recovery, the reconstruction of cybercrime timelines, big data analysis, pattern recognition, safeguarding the chain of custody, and orchestrating responses to hacking incidents, examining in detail how AI-driven methodologies are shaping these crucial facets of digital forensics practice. While the promise of AI in digital forensics is evident, the challenges arising from increasing database sizes and evolving criminal tactics necessitate ongoing collaborative research.
Establishing digital twins is a non-trivial endeavour, particularly for users who must create them from scratch. The ready availability of reusable model, data, and tool assets can ease the creation and use of digital twins, and a number of digital twin frameworks exist to facilitate this. In this paper, we propose a digital twin framework to author digital twin assets, create digital twins from reusable assets, and make those digital twins available as a service to other users. The proposed framework automates the management of reusable assets, storage, provisioning of compute infrastructure, communication, and monitoring tasks. Users operate at the level of digital twins and delegate the rest of the work to the digital-twin-as-a-service framework.
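To make the reusable-asset idea concrete, here is a minimal sketch of an asset catalogue from which digital twins are composed. Every name in it (Asset, Catalogue, compose, the example URIs) is a hypothetical illustration rather than the framework's actual API, and the service-side storage, compute, and monitoring automation is omitted.

```python
"""Minimal sketch of a digital-twin-as-a-service asset catalogue.
Illustrative only: all names are hypothetical."""
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str   # "model", "data", or "tool"
    uri: str    # where the reusable artefact is stored

@dataclass
class DigitalTwin:
    name: str
    assets: list = field(default_factory=list)

class Catalogue:
    def __init__(self):
        self._assets = {}

    def publish(self, asset: Asset):
        # Authors contribute reusable assets to the shared library.
        self._assets[asset.name] = asset

    def compose(self, twin_name: str, asset_names: list) -> DigitalTwin:
        # Users build a twin purely from catalogued assets; the service
        # layer would handle storage, compute, and monitoring for them.
        return DigitalTwin(twin_name, [self._assets[n] for n in asset_names])

cat = Catalogue()
cat.publish(Asset("pendulum-model", "model", "s3://assets/pendulum.fmu"))
cat.publish(Asset("sensor-feed", "data", "mqtt://broker/plant/telemetry"))
twin = cat.compose("lab-pendulum", ["pendulum-model", "sensor-feed"])
```

The point of the design is that users only ever name assets and twins; everything below that line of abstraction is delegated to the service.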
Cybercrime and the market for cyber-related compromises are becoming attractive revenue sources for state-sponsored actors, cybercriminals, and technically skilled individuals affected by financial hardship. With cybercrime burgeoning on new technological frontiers, efforts have been made to assist digital forensic investigators (DFI) and law enforcement agencies (LEA) in their investigative work. Forensic tool innovations and ontology developments, such as the Unified Cyber Ontology (UCO) and Cyber-investigation Analysis Standard Expression (CASE), have been proposed to assist DFI and LEA. Although these tools and ontologies are useful, they lack extensive information-sharing and tool-interoperability features, and the ontologies lack the Smart City Infrastructure (SCI) context. To mitigate the weaknesses in both solutions and to ensure a safer cyber-physical environment for all, we propose the Smart City Ontological Paradigm Expression (SCOPE), an expansion profile of the UCO and CASE ontologies that implements SCI threat models, SCI digital forensic evidence, and attack techniques, patterns, and classifications from MITRE. We showcase how SCOPE could present complex
The emerging field of predictive analytics in psychiatry has generated, and continues to generate, massive interest over time, with major promises to positively change and revolutionize clinical psychiatry; healthcare and medical professionals are greatly looking forward to its integration into psychiatry. However, directly applying predictive analytics to the practice of psychiatry could harm those it is used on by creating new medical issues or worsening existing ones. In either case, medical-ethics issues arise and need to be addressed. Drawing on the literature, this paper describes selected stages in the treatment of mental disorders and phases of a predictive analytics project, approaches the diagnosis of mental disorders using predictive models that rely on neural networks, analyzes the complexities of clinical psychiatry, neural networks, and predictive analytics, and concludes by elaborating on the limitations and medical-ethics issues of applying neural networks and predictive analytics to clinical psychiatry.
The great behavioral heterogeneity observed between individuals with the same psychiatric disorder, and even within one individual over time, complicates both clinical practice and biomedical research. Modern technologies, however, present an exciting opportunity to improve behavioral characterization. Data from psychiatric methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at far greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometry, open avenues of inquiry that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge, one that will necessitate new data processing tools and new machine learning techniques.
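As a toy illustration of behavioral quantification from a passive sensor stream, the sketch below derives one simple feature, daily distance traveled, from phone GPS points. The data format is an assumption, and a real pipeline would also handle gaps, noise, and privacy.

```python
"""Toy behavioral feature from passive phone GPS: daily distance
traveled. Illustrative only."""
import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def daily_distance(points):
    # points: list of (iso_date, lat, lon) tuples sorted by time.
    totals = defaultdict(float)
    for (d1, la1, lo1), (d2, la2, lo2) in zip(points, points[1:]):
        if d1 == d2:  # only accumulate movement within the same day
            totals[d1] += haversine_km(la1, lo1, la2, lo2)
    return dict(totals)

pts = [("2024-05-01", 40.4406, -79.9959),
       ("2024-05-01", 40.4435, -79.9436),
       ("2024-05-02", 40.4435, -79.9436)]
print(daily_distance(pts))  # {'2024-05-01': ~4.4}
```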
Education and training in digital forensics require a variety of suitable challenge corpora containing realistic features, including regular wear-and-tear, background noise, and the actual digital traces to be discovered during investigation. Typically, creating these challenges demands arduous effort from the educator to ensure their viability. Once created, the challenge image needs to be stored and distributed to a class for practical training. This storage and distribution step requires significant time and resources, and may not even be possible in an online/distance learning scenario due to the data sizes involved. In this paper, we introduce a more capable methodology and system as an alternative to current approaches. EviPlant is a system designed for the efficient creation, manipulation, storage, and distribution of challenges for digital forensics education and training. The system relies on the initial distribution of base disk images, i.e., images containing solely base operating systems. To create challenges for students, educators can boot the base system, emulate the desired activity, and perform a "diffing" of the resultant image and the base image.
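The following sketch illustrates the block-level "diffing" idea under stated assumptions: raw images compared in fixed-size blocks, with changed blocks kept as a compact package that is later overlaid onto a locally held base image. This is an illustration of the concept, not EviPlant's actual package format.

```python
"""Block-level disk-image diffing sketch (illustrative only)."""
import shutil

BLOCK = 4096  # compare images in 4 KiB blocks

def diff_images(base_path, modified_path):
    changed = {}  # block offset -> new block bytes
    with open(base_path, "rb") as base, open(modified_path, "rb") as mod:
        offset = 0
        while True:
            b, m = base.read(BLOCK), mod.read(BLOCK)
            if not b and not m:
                break
            if b != m:  # block differs, or exists only in one image
                changed[offset] = m
            offset += BLOCK
    return changed

def apply_diff(base_path, changed, out_path):
    # Rebuild the challenge image by overlaying changed blocks onto
    # a local copy of the (already distributed) base image.
    shutil.copyfile(base_path, out_path)
    with open(out_path, "r+b") as out:
        for offset, block in changed.items():
            out.seek(offset)
            out.write(block)
```

Only the package of changed blocks needs to be shipped to students, which is why the approach sidesteps the storage and bandwidth costs of distributing full disk images.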
In the framework of digital topology, we study structural and topological properties of digital n-dimensional manifolds. We introduce the notion of simple connectedness of a digital space and prove that if M and N are homotopy equivalent digital spaces and M is simply connected, then so is N. We show that a simply connected digital 2-manifold is the digital 2-sphere and a simply connected digital 3-manifold is the digital 3-sphere. This property can be considered as a digital form of the Poincaré conjecture for continuous three-manifolds.
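For quick reference, the abstract's three claims can be stated compactly. The notation below (homotopy equivalence $\simeq$ and $S^n_D$ for the digital $n$-sphere) is ours, not necessarily the paper's.

```latex
% Compact restatement of the claims above (notation ours).
\begin{itemize}
  \item If $M \simeq N$ (digital homotopy equivalence) and $M$ is
        simply connected, then $N$ is simply connected.
  \item If $M$ is a simply connected digital $2$-manifold, then
        $M \cong S^2_D$ (the digital $2$-sphere).
  \item If $M$ is a simply connected digital $3$-manifold, then
        $M \cong S^3_D$ (the digital $3$-sphere).
\end{itemize}
```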
We present BookReconciler, an open-source tool for enhancing and clustering book data. BookReconciler allows users to take spreadsheets with minimal metadata, such as book title and author, and automatically (1) add authoritative, persistent identifiers such as ISBNs, and (2) cluster related Expressions and Manifestations of the same Work, e.g., different translations or editions. This enhancement makes it easier to combine related collections and analyze books at scale. The tool is currently designed as an extension for OpenRefine, a popular data-cleaning application, and connects to major bibliographic services including the Library of Congress, VIAF, OCLC, HathiTrust, Google Books, and Wikidata. Our approach prioritizes human judgment: through an interactive interface, users can manually evaluate matches and define the contours of a Work (e.g., whether or not to include translations). We evaluate reconciliation performance on datasets of U.S. prize-winning books and contemporary world fiction. BookReconciler achieves near-perfect accuracy for U.S. works but lower performance for global texts, reflecting structural weaknesses in bibliographic infrastructures for non-English and global literature.
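As a rough sketch of the Work-level clustering step on minimal metadata, the snippet below blocks records by a normalized title/author key. The normalization rules are illustrative assumptions; BookReconciler's real pipeline additionally queries the bibliographic services listed above and keeps a human in the loop.

```python
"""Toy Work-level clustering on minimal book metadata."""
import re
import unicodedata
from collections import defaultdict

def normalize(text):
    # Fold accents and case, drop punctuation and leading articles.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    text = re.sub(r"^(the|a|an)\s+", "", text).strip()
    return re.sub(r"\s+", " ", text)

def author_key(author):
    # Sort name tokens so "Morrison, Toni" matches "Toni Morrison".
    return " ".join(sorted(normalize(author).split()))

def cluster_works(records):
    # records: dicts with "title" and "author"; returns candidate
    # Work clusters for a human to confirm or split.
    clusters = defaultdict(list)
    for rec in records:
        clusters[(normalize(rec["title"]), author_key(rec["author"]))].append(rec)
    return clusters

rows = [{"title": "Beloved", "author": "Toni Morrison"},
        {"title": "BELOVED.", "author": "Morrison, Toni"}]
print(cluster_works(rows))  # both rows land in one cluster
```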
In light of the NIMH's Research Domain Criteria (RDoC) and the advent of functional neuroimaging, novel technologies and methods provide new opportunities to develop precise and personalized prognosis and diagnosis of mental disorders. Machine learning (ML) and artificial intelligence (AI) technologies are playing an increasingly critical role in the new era of precision psychiatry. Combining ML/AI with neuromodulation technologies can potentially provide explainable solutions in clinical practice and effective therapeutic treatment. Advanced wearable and mobile technologies also call for a new role for ML/AI in digital phenotyping for mobile mental health. We provide a comprehensive review of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatric practice. Additionally, we review the role of ML in molecular phenotyping and cross-species biomarker identification in precision psychiatry. We further discuss explainable AI (XAI) and causality testing in a closed-loop, human-in-the-loop manner, and highlight the potential of ML in multimedia information extraction and multimodal data fusion. Finally, we discuss conceptual and practical challenges in precision psychiatry and highlight opportunities for future research.
We discuss the digitization and subsequent digital analysis and processing of negatives (and diapositives) made with Finlay, Thames, Dufay, Paget, and similar additive color screen processes. These early color processes (introduced in the 1890s and popular until the 1950s) used a special color screen filter together with a monochromatic negative. Due to the poor stability of the dyes used to produce the color screens, many of the photographs appear faded; others exist only in the form of (monochromatic) negatives. We discuss the possibility of digitally reconstructing the original color from scans of original negatives, or from infrared imaging of original transparencies (which eliminates the physically coupled color filters), by digitally recreating the original color filter pattern using a new open-source software tool. Photographs taken using additive color screen processes are some of the very earliest color images of our shared cultural heritage. They depict people, places, and events for which there are no other surviving color images. We hope that our new software tool can bring these images back to life.
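A toy version of the reconstruction step might look as follows: invert the monochrome negative and modulate it with a recreated periodic screen pattern (here a plain three-column RGB repeat standing in for an actual Finlay/Dufay-type screen geometry). Registering the pattern to the plate, which the real tool must handle, is skipped entirely.

```python
"""Toy additive-screen color reconstruction. Illustrative only."""
import numpy as np

def reconstruct(negative, period=3):
    # negative: 2-D float array in [0, 1] from the scanned plate.
    positive = 1.0 - negative                # invert the negative
    h, w = positive.shape
    screen = np.zeros((h, w, 3))
    for c in range(3):                       # R, G, B stripe pattern
        screen[:, c::period, c] = 1.0
    return positive[:, :, None] * screen     # colorized estimate

neg = np.random.rand(8, 9)
rgb = reconstruct(neg)
print(rgb.shape)  # (8, 9, 3)
```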
Digital Humanities (DH) is an interdisciplinary field that integrates computational methods with humanities scholarship to investigate innovative topics. Each academic discipline follows a unique developmental path shaped by the topics researchers investigate and the methods they employ. Using bibliometric analysis, most previous studies have examined DH across multiple dimensions, such as research hotspots, co-author networks, and institutional rankings. However, these studies have often been limited in their ability to provide deep insights into the current state of technological advancement and topic development in DH, so their conclusions tend to remain superficial or lack interpretability regarding how methods and topics interrelate in the field. To address this gap, this study introduces the concept of Topic-Method Composition (TMC), a hybrid knowledge structure generated by the co-occurrence of a specific research topic and its corresponding method. By analyzing the interactions between TMCs, we can see more clearly how digital technology and humanistic subjects intersect and integrate in DH. Moreover, this study offers a new analytical lens for tracing how methods and topics co-evolve in the field.
Technological advances have enabled multiple countries to consider implementing Smart City Infrastructure to provide in-depth insights into different data points and enhance the lives of citizens. Unfortunately, these new technological implementations also entice adversaries and cybercriminals to execute cyber-attacks and commit criminal acts against these modern infrastructures. Given the borderless nature of cyber-attacks, varying levels of understanding of smart city infrastructure, and ongoing investigation workloads, law enforcement agencies and investigators would be hard-pressed to respond to these kinds of cybercrime. Without such investigative capability, these smart infrastructures could become new targets favored by cybercriminals. To address the challenges investigators face, we propose a common definition of smart city infrastructure. Based on this definition, we use the STRIDE threat modeling methodology and the Microsoft Threat Modeling Tool to identify threats present in the infrastructure and create a threat model that interested parties can further customize or extend. Next, we map offences, possible evidence sources, and types of threats.
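In the spirit of that mapping, a small illustrative structure linking STRIDE categories to example smart-city offences and candidate evidence sources might look as follows. The entries are hypothetical examples, not the paper's actual mapping.

```python
"""Illustrative STRIDE-to-offence/evidence mapping (hypothetical)."""
STRIDE_MAP = {
    "Spoofing": {
        "offence": "impersonating a roadside sensor",
        "evidence": ["device authentication logs", "network captures"],
    },
    "Tampering": {
        "offence": "altering traffic-signal firmware",
        "evidence": ["firmware images", "update-server logs"],
    },
    "Repudiation": {
        "offence": "denying issuance of a control command",
        "evidence": ["signed audit trails"],
    },
    "Information disclosure": {
        "offence": "exfiltrating citizen mobility data",
        "evidence": ["database access logs", "egress flow records"],
    },
    "Denial of service": {
        "offence": "flooding a city services gateway",
        "evidence": ["netflow records", "IDS alerts"],
    },
    "Elevation of privilege": {
        "offence": "gaining admin rights on a city platform",
        "evidence": ["IAM change logs", "memory images"],
    },
}

for category, details in STRIDE_MAP.items():
    print(f"{category}: {details['offence']}")
```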
With the growing interest in using AI and machine learning (ML) in medicine, a growing body of literature covers the application and ethics of AI and ML in areas of medicine such as clinical psychiatry. However, little of this literature covers the associated economic aspects. This study addresses that gap by examining the economic implications of using ML in clinical psychiatry through three problem-oriented case studies; literature on economics, socioeconomics, and medical AI; and two types of health economic evaluations. In addition, we detail fairness, legal, ethical, and other considerations for ML in clinical psychiatry.
In the rapidly evolving field of digital libraries, the development of large language models (LLMs) has opened up new possibilities for simulating user behavior. This innovation addresses a longstanding challenge in digital library research: the scarcity of publicly available datasets on user search patterns due to privacy concerns. In this context, we introduce Agent4DL, a user search behavior simulator specifically designed for digital library environments. Agent4DL generates realistic user profiles and dynamic search sessions that closely mimic actual search strategies, including querying, clicking, and stopping behaviors tailored to specific user profiles. The simulator's accuracy has been validated through comparisons with real user data. Notably, Agent4DL performs competitively against existing user search simulators such as SimIIR 2.0, particularly in generating more diverse and context-aware user behaviors.
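A stripped-down skeleton of such a simulator is sketched below: a profile conditions query generation, click selection, and a stopping decision. The llm() stub stands in for a real model call, and all names are hypothetical rather than Agent4DL's actual API.

```python
"""Minimal LLM-driven search-session simulator skeleton."""
import random

def llm(prompt):
    # Placeholder for the language-model call a real simulator makes.
    return f"history of {prompt.split()[-1]}"

def simulate_session(profile, search, max_turns=5):
    session = []
    query = llm(f"first query for a user interested in {profile}")
    for _ in range(max_turns):
        results = search(query)
        # Click results that look relevant to the simulated profile.
        clicks = [r for r in results if any(w in r for w in profile.split())]
        session.append({"query": query, "clicks": clicks})
        if clicks and random.random() < 0.5:  # stopping behavior
            break
        query = llm(f"reformulated query about {profile}")
    return session

fake_search = lambda q: [f"{q} - result {i}" for i in range(3)]
print(simulate_session("printing technology", fake_search))
```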
Knowledge of regional net primary productivity (NPP) is important for a systematic understanding of the global carbon cycle. In this study, multi-source data were employed to conduct a 33-year regional NPP study of southwest China at a 1-km scale. A multi-sensor fusion framework was applied to obtain a new normalized difference vegetation index (NDVI) time series from 1982 to 2014, combining the respective advantages of the different remote sensing datasets. As another key parameter for NPP modeling, total solar radiation was calculated with the improved Yang hybrid model (YHM) using meteorological station data. Verification confirmed the feasibility of all the applied data processing, and the NPP calculated with the final processed NDVI showed greatly improved accuracy. The spatio-temporal analysis indicated that 68.07% of the study area showed an increasing NPP trend over the past three decades. Significant heterogeneity was found in the correlation between NPP and precipitation at the monthly scale: negative in the growing season and positive in the dry season. A lagged positive correlation between NPP and precipitation was also observed.
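The abstract does not reproduce the model equations. For orientation, light-use-efficiency models of the CASA family, commonly used for this kind of NDVI-driven NPP estimation, take the form below, where SOL is the total solar radiation that the YHM supplies; whether the paper's exact formulation matches is an assumption.

```latex
% Generic light-use-efficiency (CASA-family) formulation.
\mathrm{NPP}(x,t) = \mathrm{APAR}(x,t)\,\varepsilon(x,t),
\qquad
\mathrm{APAR}(x,t) = 0.5\,\mathrm{SOL}(x,t)\,\mathrm{fPAR}(x,t)
% fPAR is derived from NDVI; \varepsilon is the actual light-use
% efficiency under temperature and water stress; 0.5 is the PAR
% fraction of total solar radiation.
```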
The increasing prevalence of Internet of Things (IoT) devices makes it inevitable that their pertinence to digital forensic investigations will grow for the foreseeable future. These devices, produced by various vendors, often possess limited standard interfaces for communication, such as USB ports or WiFi/Bluetooth wireless interfaces. Meanwhile, with an increasing mainstream focus on the security and privacy of user data, built-in encryption is becoming commonplace in consumer-level computing devices, and IoT devices are no exception. Under these circumstances, digital forensic investigations face a significant challenge whenever data from IoT devices needs to be analysed. This work explores the electromagnetic (EM) side-channel analysis literature for the purpose of assisting digital forensic investigations of IoT devices. EM side-channel analysis is a technique in which unintentional electromagnetic emissions are used to eavesdrop on the operations and data handling of computing devices. The non-intrusive nature of EM side-channel approaches makes them a viable option for assisting digital forensic investigations, as these attacks require, and must result in, no modification to the target device.
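To give a flavor of the analysis techniques surveyed, the sketch below runs the core of a correlation-based attack on synthetic traces: a Hamming-weight leakage model is correlated with every sample point, and the peak marks the data-dependent emission. All numbers are synthetic; no real device or capture setup is implied.

```python
"""Correlation-based side-channel analysis on synthetic EM traces."""
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 500, 200
data = rng.integers(0, 256, n_traces)             # bytes handled by the device
hw = np.array([bin(x).count("1") for x in data])  # Hamming-weight leakage model

# Synthetic traces: Gaussian noise plus a leaky sample at index 120.
traces = rng.normal(0, 1, (n_traces, n_samples))
traces[:, 120] += 0.5 * hw

# Pearson correlation between the model and every sample point.
corr = np.array([np.corrcoef(hw, traces[:, t])[0, 1] for t in range(n_samples)])
print("strongest leak at sample", int(np.argmax(np.abs(corr))))  # -> 120
```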
Central Bank Digital Currency (CBDC) can be defined as a virtual currency, issued by a country and backed by its legal credit, that is based on a node network and digital encryption algorithms. CBDCs are supported by Distributed Ledger Technologies (DLTs), and they may provide a universal means of payment for the digital era. There are many ways to proceed, but they all require central banks to develop technological expertise. Given these points, it is important to understand the new IT governance emerging in financial markets due to CBDC and the digital economy, since Information Technology is an essential driver of the new financial industry design. This paper aims to answer two questions through an updated Systematic Literature Review (SLR). First, what IT resources and tools have been considered or applied to set the governance of CBDC adoption? Second, what IT governance models have emerged in the financial market due to CBDC adoption? Publications of the Bank for International Settlements (BIS), Scopus, and Web of Science were used as sources of studies. After the search strings and inclusion criteria were applied, fourteen papers were analyzed. This paper identifies the IT resources, tools, and governance models reported in those studies.
Immersive virtual reality (VR) is emerging as a promising research and clinical tool. However, several studies suggest that VR-induced adverse symptoms and effects (VRISE) may undermine health and safety standards and the reliability of scientific results. In this literature review, the technical causes of this adverse symptomatology are investigated to provide suggestions and technological knowledge for implementing VR head-mounted display (HMD) systems in cognitive neuroscience. The systematic review of the technological literature identified features pertinent to display, sound, motion tracking, navigation, ergonomic interactions, user experience, and computer hardware that researchers should consider. Subsequently, a meta-analysis of 44 neuroscientific or neuropsychological studies involving VR HMD systems was performed. It demonstrated that new-generation HMDs induced significantly less VRISE and marginally fewer dropouts. Importantly, the commercial versions of the new-generation HMDs with ergonomic interactions had zero incidents of adverse symptomatology and dropouts. HMDs equivalent to or better than these commercial versions thus appear suitable for research use.
The University of Virginia received a grant of $1,000,000 from the Andrew W. Mellon Foundation to enable the Library, in collaboration with Cornell University, to build a digital object repository system based on the Flexible Extensible Digital Object and Repository Architecture (Fedora). The new system demonstrates how a distributed digital library architecture can be deployed using web-based technologies, including XML and Web services, and is designed as a foundation upon which interoperable web-based digital libraries can be built. Virginia and collaborating partners in the US and UK will evaluate the system using a diverse set of digital collections. The software will be made available to the public as an open-source release.
Various XML-based approaches aimed at representing compound digital assets have emerged over the last several years. Approaches that are of specific relevance to the digital library community include the Metadata Encoding and Transmission Standard (METS), the IMS Content Packaging XML Binding, and the XML Formatted Data Units (XFDU) developed by CCSDS Panel 2. The MPEG-21 Digital Item Declaration (MPEG-21 DID) is another standard specifying the representation of digital assets in XML that, so far, has received little attention in the digital library community. This article gives a brief insight into the MPEG-21 standardization effort, highlights the major characteristics of the MPEG-21 DID Abstract Model, and describes the MPEG-21 Digital Item Declaration Language (MPEG-21 DIDL), an XML syntax for the representation of digital assets based on the MPEG-21 DID Abstract Model. Also, it briefly demonstrates the potential relevance of MPEG-21 DID to the digital library community by describing its use in the aDORe repository environment at the Research Library of the Los Alamos National Laboratory (LANL) for the representation of digital assets.
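To make the DIDL syntax tangible, here is a hedged sketch that builds a minimal document, one Item carrying a Descriptor/Statement pair and a Component with a by-reference Resource, using Python's ElementTree. The element names follow the DID Abstract Model as described above, but treat the namespace URI and attribute details as assumptions rather than normative.

```python
"""Minimal MPEG-21 DIDL document built with ElementTree (sketch)."""
import xml.etree.ElementTree as ET

DIDL_NS = "urn:mpeg:mpeg21:2002:02-DIDL-NS"  # assumed DIDL namespace
ET.register_namespace("didl", DIDL_NS)

def q(tag):
    # Qualify a tag name with the DIDL namespace.
    return f"{{{DIDL_NS}}}{tag}"

didl = ET.Element(q("DIDL"))
item = ET.SubElement(didl, q("Item"))

# Descriptor/Statement pair carrying descriptive metadata.
desc = ET.SubElement(item, q("Descriptor"))
stmt = ET.SubElement(desc, q("Statement"), {"mimeType": "text/plain"})
stmt.text = "Scanned photograph, example collection."

# Component with a by-reference Resource pointing at the datastream.
comp = ET.SubElement(item, q("Component"))
ET.SubElement(comp, q("Resource"), {
    "mimeType": "image/tiff",
    "ref": "https://example.org/datastreams/photo-001.tiff",
})

print(ET.tostring(didl, encoding="unicode"))
```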