Found 20 results
It is argued that emotions are lawful phenomena and thus can be described in terms of a set of laws of emotion. These laws result from the operation of emotion mechanisms that are accessible to intentional control to only a limited extent. The law of situational meaning, the law of concern, the law of reality, the laws of change, habituation and comparative feeling, and the law of hedonic asymmetry are proposed to describe emotion elicitation; the law of conservation of emotional momentum formulates emotion persistence; the law of closure expresses the modularity of emotion; and the laws of care for consequence, of lightest load, and of greatest gain pertain to emotion regulation. For a long time, emotion was an underprivileged area in psychology. It was not regarded as a major area of scientific psychological endeavor that seemed to deserve concerted research efforts or receive them. Things have changed over the last 10 or so years. Emotion has become an important domain with a coherent body of theory and data. It has developed to such an extent that its phenomena can be described in terms of a set of laws, the laws of emotion, that I venture to describe here. Formulating a set of laws of emotion implies not only that the study of emotion has developed sufficiently to do so but also that emotional phenomena are indeed lawful. It implies that emotions emerge, wax, and wane according to rules in strictly determined fashion. To argue this is a secondary objective of this article. Emotions are lawful. When experiencing emotions, people are subject to laws. When filled by emotions, they are manifesting the workings of laws. There is a place for obvious a priori reservations here. Emotions and feelings are often considered the most idiosyncratic of psychological phenomena, and they suggest human freedom at its clearest.
The mysticism of ineffability and freedom that surrounds emotions may be one reason why the psychology of emotion and feeling has advanced so slowly over the last 100 years. This mysticism is largely unfounded, and the freedom of feeling is an illusion. For one thing, the notion of freedom of feeling runs counter to the traditional wisdom that human beings are enslaved by their passions. For another, the laws of emotion may help us to discern that simple, universal, moving forces operate behind the complex, idiosyncratic movements of feeling, in the same way that the erratic path of an ant, to borrow Simon's (1973) well-known parable, manifests the simple structure of a simple animal's mind. The word law may give rise to misunderstanding. When formulating laws in this article, I am discussing what are primarily empirical regularities. These regularities--or putative regularities--are, however, assumed to rest on underlying causal mechanisms that generate them. I am suggesting that the laws of emotion are grounded in mechanisms that are not of a voluntary nature and that are only partially under voluntary control. Not only do emotions obey the laws; we obey them. We are subject to our emotions, and we cannot engender emotions at will. The laws of emotion that I will discuss are not all equally well established. Not all of them originate in solid evidence, nor are all equally supported by it. To a large extent, in fact, to list the laws of emotion is to list a program of research. However, the laws provide a coherent picture of emotional responding, which suggests that such a research program might be worthwhile.
The prospect of artificial superintelligence -- AI agents that can generally outperform humans in cognitive tasks and economically valuable activities -- will transform the legal order as we know it. Operating autonomously or under only limited human oversight, AI agents will assume a growing range of roles in the legal system. First, in making consequential decisions and taking real-world actions, AI agents will become de facto subjects of law. Second, to cooperate and compete with other actors (human or non-human), AI agents will harness conventional legal instruments and institutions such as contracts and courts, becoming consumers of law. Third, to the extent AI agents perform the functions of writing, interpreting, and administering law, they will become producers and enforcers of law. These developments, whenever they ultimately occur, will call into question fundamental assumptions in legal theory and doctrine, especially to the extent they ground the legitimacy of legal institutions in their human origins. Attempts to align AI agents with extant human law will also face new challenges as AI agents will not only be a primary target of law, but a core user of law and contribu
In modern markets, many companies offer so-called 'free' services and monetize consumer data they collect through those services. This paper argues that consumer law and data protection law can usefully complement each other. Data protection law can also inform the interpretation of consumer law. Using consumer rights, consumers should be able to challenge excessive collection of their personal data. Consumer organizations have used consumer law to tackle data protection infringements. The interplay of data protection law and consumer protection law provides exciting opportunities for a more integrated vision of 'data consumer law'.
Algorithmic decision-making and similar types of artificial intelligence (AI) may lead to improvements in all sectors of society, but can also have discriminatory effects. While current non-discrimination law offers people some protection, algorithmic decision-making presents the law with several challenges. For instance, algorithms can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or apartment number, or more complicated categories combining many data points. Such new types of differentiation could evade non-discrimination law, as browser type and house number are not protected characteristics, but such differentiation could still be unfair, for instance if it reinforces social inequality. This paper explores which system of non-discrimination law can best be applied to algorithmic decision-making, considering that algorithms can differentiate on the basis of characteristics that do not correlate with protected grounds of discrimination such as ethnicity or gender. The paper analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach algorithm
Amid the surge of intellectual property (IP) disputes surrounding non-fungible tokens (NFTs), some scholars have advocated for the application of personal property or sales law to regulate NFT minting and transactions, contending that IP laws unduly hinder the development of the NFT market. This Article counters these proposals and argues that the existing IP system stands as the most suitable regulatory framework for governing the evolving NFT market. Compared to personal property or sales law, IP laws can more effectively address challenges such as tragedies of the commons and anticommons in the NFT market. NFT communities have also developed their own norms and licensing agreements upon existing IP laws to regulate shared resources. Moreover, the IP regimes, with both static and dynamic institutional designs, can effectively balance various policy concerns, such as innovation, fair competition, and consumer protection, which alternative proposals struggle to provide.
Computer science research sometimes brushes with the law, from red-team exercises that probe the boundaries of authentication mechanisms, to AI research processing copyrighted material, to platform research measuring the behavior of algorithms and users. U.S.-based computer security research is no stranger to the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA) in a relationship that is still evolving through case law, research practices, changing policies, and legislation. Amid the landscape computer scientists, lawyers, and policymakers have learned to navigate, anti-fraud laws are a surprisingly under-examined challenge for computer science research. Fraud brings separate issues that are not addressed by the methods for navigating CFAA, DMCA, and Terms of Service that are more familiar in the computer security literature. Although anti-fraud laws have been discussed to a limited extent in older research on phishing attacks, modern computer science researchers are left with little guidance when it comes to navigating issues of deception outside the context of pure laboratory research. In this paper, we analyze and taxonomize the anti-fraud and d
Large Language Models (LLMs) represent a promising frontier for recommender systems, yet their development has been impeded by the absence of predictable scaling laws, which are crucial for guiding research and optimizing resource allocation. We hypothesize that this may be attributed to the inherent noise, bias, and incompleteness of raw user interaction data in prior continual pre-training (CPT) efforts. This paper introduces a novel, layered framework for generating high-quality synthetic data that circumvents such issues by creating a curated, pedagogical curriculum for the LLM. We provide powerful, direct evidence for the utility of our curriculum by showing that standard sequential models trained on our principled synthetic data significantly outperform ($+130\%$ on recall@100 for SasRec) models trained on real data in downstream ranking tasks, demonstrating its superiority for learning generalizable user preference patterns. Building on this, we empirically demonstrate, for the first time, robust power-law scaling for an LLM that is continually pre-trained on our high-quality, recommendation-specific data. Our experiments reveal consistent and predictable perplexity reductio
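The power-law scaling claimed above can be illustrated with a minimal sketch: a power law $L(x) = c\,x^{-\alpha}$ is a straight line in log-log space, so the exponent can be read off with ordinary linear regression. The numbers below are made-up illustrative values, not measurements from the paper.

```python
import numpy as np

# Hypothetical perplexity measurements at increasing data scales.
# These values are illustrative only, not the paper's results.
scales = np.array([1e6, 1e7, 1e8, 1e9])  # e.g. training tokens
loss = np.array([8.0, 5.0, 3.2, 2.0])    # e.g. perplexity

# A power law L(x) = c * x^(-alpha) is linear in log-log space:
# log L = log c - alpha * log x, so fit a straight line to the logs.
slope, log_c = np.polyfit(np.log(scales), np.log(loss), 1)
alpha = -slope  # the fitted slope is -alpha

print(f"fitted exponent alpha ~ {alpha:.3f}")
print(f"fitted prefactor c ~ {np.exp(log_c):.3f}")
```

If the fitted line tracks the data closely across scales, the scaling is "predictable" in the sense the abstract describes: losses at larger scales can be extrapolated from the fit.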
Since the public release of ChatGPT in November 2022, the AI landscape has been undergoing a rapid transformation. Currently, the use of AI chatbots by consumers has largely been limited to image generation or question-answering language models. The next generation of AI systems, AI agents that can plan and execute complex tasks with only limited human involvement, will be capable of a much broader range of actions. In particular, consumers could soon be able to delegate purchasing decisions to AI agents acting as Custobots. Against this background, the Article explores whether EU consumer law, as it currently stands, is ready for the rise of the Custobot Economy. In doing so, the Article makes three contributions. First, it outlines how the advent of AI agents could change the existing e-commerce landscape. Second, it explains how AI agents challenge the premises of a human-centric consumer law that is based on the assumption that consumption decisions are made by humans. Third, the Article presents some initial considerations on what a future consumer law that works for both humans and machines could look like.
Investigating serious crimes is inherently complex and resource-constrained. Law enforcement agencies (LEAs) grapple with overwhelming volumes of offender and incident data, making effective suspect identification difficult. Although machine learning (ML)-enabled systems have been explored to support LEAs, several have failed in practice. This highlights the need to align system behavior with stakeholder goals early in development, motivating the use of Goal-Oriented Requirements Engineering (GORE). This paper reports our experience applying the GORE framework KAOS to designing an ML-enabled system for identifying suspects in online child sexual abuse. We describe how KAOS supported early requirements elaboration, including goal refinement, object modeling, agent assignment, and operationalization. A key finding is the central role of data elicitation: data requirements constrain refinement choices and candidate agents while influencing how goals are linked, operationalized, and satisfied. Conversely, goal elaboration and agent assignment shape data quality expectations and collection needs. Our experience highlights the iterative, bidirectional dependencies between goals, data, an
Our society can benefit immensely from algorithmic decision-making and similar types of artificial intelligence. But algorithmic decision-making can also have discriminatory effects. This paper examines that problem, using online price differentiation as an example of algorithmic decision-making. With online price differentiation, a company charges different people different prices for identical products, based on information the company has about those people. The main question in this paper is: to what extent can non-discrimination law protect people against online price differentiation? The paper shows that online price differentiation and algorithmic decision-making could lead to indirect discrimination, for instance harming people with a certain ethnicity. Indirect discrimination occurs when a practice is neutral at first glance, but ends up discriminating against people with a protected characteristic, such as ethnicity. In principle, non-discrimination law prohibits indirect discrimination. The paper also shows, however, that non-discrimination law has flaws when applied to algorithmic decision-making. For instance, algorithmic discrimination can remain hidden: people may no
This article discusses the troubled relationship between contemporary advertising technology (adtech) systems, in particular systems of real-time bidding (RTB, also known as programmatic advertising) underpinning much behavioral targeting on the web and through mobile applications, and European data protection law. This article analyzes the extent to which practices of RTB are compatible with the requirements regarding a legal basis for processing, transparency, and security in European data protection law. We first introduce the technologies at play by explaining and analyzing the systems deployed online today. Following that, we turn to the law. Rather than analyze RTB against every provision of the General Data Protection Regulation (GDPR), we consider RTB in the context of the GDPR's requirement of a legal basis for processing and the GDPR's transparency and security requirements. We show, first, that the GDPR requires prior consent of the internet user for RTB, as other legal bases are not appropriate. Second, we show that it is difficult - and perhaps impossible - for website publishers and RTB companies to meet the GDPR's transparency requirements. Third, RTB incentivizes insecure data processing. We co
Recently, Wang et al. [1] reported an unusual violation of the Wiedemann-Franz law in three semimetals. We compare their observations to ours in a variety of systems, where apparent WF law violations in the same temperature range arise as a consequence of electron-phonon decoupling. Given the empirical similarity of their data with these cases, the most plausible explanation for the reported violation is an experimental artefact.
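For context, the Wiedemann-Franz law in its standard textbook form (not quoted from the paper): in a metal where the same electrons carry both charge and heat, the ratio of thermal conductivity $\kappa$ to electrical conductivity $\sigma$ grows linearly with temperature, with a universal Lorenz constant

```latex
\frac{\kappa}{\sigma T} = L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}.
```

A reported "violation" is a measured deviation of $\kappa/(\sigma T)$ from $L_0$; the abstract argues that electron-phonon decoupling in the measurement can mimic such a deviation without any intrinsic breakdown of the law.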
Privacy law and regulation have turned to "consent" as the legitimate basis for collecting and processing individuals' data. As governments have rushed to enshrine consent requirements in their privacy laws, such as the California Consumer Privacy Act (CCPA), significant challenges remain in understanding how these legal mandates are operationalized in software. The opaque nature of software development processes further complicates this translation. To address this, we explore the use of Large Language Models (LLMs) in requirements engineering to bridge the gap between legal requirements and technical implementation. This study employs a three-step pipeline that involves using an LLM to classify software use cases for compliance, generating LLM modifications for non-compliant cases, and manually validating these changes against legal standards. Our preliminary findings highlight the potential of LLMs in automating compliance tasks, while also revealing limitations in their reasoning capabilities. By benchmarking LLMs against real-world use cases, this research provides insights into leveraging AI-driven solutions to enhance legal compliance of software.
Large-scale deep learning models are known to memorize parts of the training set. In machine learning theory, memorization is often framed as interpolation or label fitting, and classical results show that this can be achieved when the number of parameters $p$ in the model is larger than the number of training samples $n$. In this work, we consider memorization from the perspective of data reconstruction, demonstrating that this can be achieved when $p$ is larger than $dn$, where $d$ is the dimensionality of the data. More specifically, we show that, in the random features model, when $p \gg dn$, the subspace spanned by the training samples in feature space gives sufficient information to identify the individual samples in input space. Our analysis suggests an optimization method to reconstruct the dataset from the model parameters, and we demonstrate that this method performs well on various architectures (random features, two-layer fully-connected and deep residual networks). Our results reveal a law of data reconstruction, according to which the entire training dataset can be recovered as $p$ exceeds the threshold $dn$.
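A quick arithmetic sketch of the two thresholds the abstract contrasts (the numbers are illustrative, MNIST-like values, not the paper's experimental settings): classical interpolation results require the parameter count $p$ to exceed the sample count $n$, while the stated reconstruction law requires $p$ to exceed $d \cdot n$, a much larger quantity.

```python
# Illustrative dimensions (hypothetical, MNIST-like), not from the paper.
d = 28 * 28        # input dimensionality per sample
n = 1000           # number of training samples

# Classical interpolation (label fitting) threshold: p > n.
p_interpolation = n + 1

# Reconstruction threshold per the stated law: p > d * n.
threshold = d * n
p_reconstruction = threshold + 1

print(f"interpolation threshold (p > n):    {n}")
print(f"reconstruction threshold (p > d*n): {threshold}")
```

The gap between the two thresholds is a factor of $d$: for high-dimensional inputs, a model can interpolate its training labels long before it holds enough parameters to leak the training samples themselves.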
The integration of artificial intelligence (AI) into telecommunications infrastructure introduces novel risks, such as algorithmic bias and unpredictable system behavior, that fall outside the scope of traditional cybersecurity and data protection frameworks. This paper introduces a precise definition and a detailed typology of telecommunications AI incidents, establishing them as a distinct category of risk that extends beyond conventional cybersecurity and data protection breaches. It argues for their recognition as a distinct regulatory concern. Using India as a case study for jurisdictions that lack a horizontal AI law, the paper analyzes the country's key digital regulations. The analysis reveals that India's existing legal instruments, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, creating a significant regulatory gap for AI-specific operational incidents, such as performance degradation and algorithmic bias. The paper also examines structural barriers to disclosure and the limitations of existing AI incident repositories. Based on these findings, the paper proposes
This paper addresses a critical gap in legal analytics by developing and applying a novel taxonomy for topic classification of summary judgment cases in the United Kingdom. Using a curated dataset of summary judgment cases, we use the Large Language Model Claude 3 Opus to explore functional topics and trends. We find that Claude 3 Opus correctly classified the topic with an accuracy of 87.13% and an F1 score of 0.87. The analysis reveals distinct patterns in the application of summary judgments across various legal domains. As case law in the United Kingdom is not originally labelled with keywords or a topic filtering option, the findings not only refine our understanding of the thematic underpinnings of summary judgments but also illustrate the potential of combining traditional and AI-driven approaches in legal classification. Therefore, this paper provides a new and general taxonomy for UK law. The implications of this work serve as a foundation for further research and policy discussions in the field of judicial administration and computational legal research methodologies.
Using formal renormalization theory, Yakhot derived in ([32], 1988) an $O\left(\frac{A}{\sqrt{\log A}}\right)$ growth law of the turbulent flame speed with respect to large flow intensity $A$ based on the inviscid G-equation. Although this growth law is widely cited in combustion literature, there has been no rigorous mathematical discussion to date about its validity. As a first step towards unveiling the mystery, we prove that there is no intermediate growth law between $O\left(\frac{A}{\log A}\right)$ and $O(A)$ for two dimensional incompressible Lipschitz continuous periodic flows with bounded swirl sizes. In particular, we do not assume the non-degeneracy of critical points. Additionally, other examples of flows with lower regularity, Lagrangian chaos, and related phenomena are also discussed.
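As background, a standard formulation of the inviscid G-equation level-set model (sign conventions vary across the literature; this is not quoted from the paper): the flame front is a level set of $G$, advected by the flow $V$ while propagating normal to itself at the laminar flame speed $s_l$,

```latex
G_t + V(x)\cdot\nabla G + s_l\,|\nabla G| = 0.
```

The turbulent flame speed $s_T(A)$ is then the effective (homogenized) front speed for flow intensity $A$. Yakhot's prediction is $s_T(A) = O\!\left(\frac{A}{\sqrt{\log A}}\right)$, which sits strictly between the two regimes $O\!\left(\frac{A}{\log A}\right)$ and $O(A)$ that the result above shows cannot both be avoided in the stated two-dimensional setting.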
The effectiveness of Large Language Models (LLMs) in legal reasoning is often limited due to the unique legal terminologies and the necessity for highly specialized knowledge. These limitations highlight the need for high-quality data tailored for complex legal reasoning tasks. This paper introduces LegalSemi, a benchmark specifically curated for legal scenario analysis. LegalSemi comprises 54 legal scenarios, each rigorously annotated by legal experts, based on the comprehensive IRAC (Issue, Rule, Application, Conclusion) framework from Malaysian Contract Law. In addition, LegalSemi is accompanied by a structured knowledge base (SKE). A series of experiments were conducted to assess the usefulness of LegalSemi for IRAC analysis. The experimental results demonstrate the effectiveness of incorporating the SKE for issue identification, rule retrieval, application and conclusion generation using four different LLMs.
Whether the first law of black hole mechanics is correct is an important question in black hole physics. Given the currently limited number of gravitational-wave events, we propose a weaker version of the law that permits a relatively large perturbation to a black hole system and implement a simple test with the first event, GW150914. Confronting the strain data with the theory, we obtain a constraint on the deviation parameter $\alpha=0.07\pm0.11$, which indicates that this weaker version is valid at the 68\% confidence level. This result implies that the first law of black hole mechanics may be correct.
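For reference, the first law of black hole mechanics in its standard form (geometric units $G=c=1$; the paper's deviation parametrization via $\alpha$ is its own and is not reproduced here):

```latex
\delta M = \frac{\kappa}{8\pi}\,\delta A + \Omega_H\,\delta J + \Phi_H\,\delta Q,
```

where $M$ is the black hole mass, $\kappa$ the surface gravity, $A$ the horizon area, $\Omega_H$ and $J$ the horizon angular velocity and angular momentum, and $\Phi_H$ and $Q$ the horizon electric potential and charge.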