Inquisitive questions -- open-ended, curiosity-driven questions people ask as they read -- are an integral part of discourse processing (Kehler and Rohde, 2017; Onea, 2016) and comprehension (Prince, 2004). Recent work in NLP has taken advantage of the question generation capabilities of LLMs to enhance a wide range of applications. But the space of inquisitive questions is vast: many questions can be evoked from a given context. So which of those should be prioritized to find answers? Linguistic theories, unfortunately, have not yet provided an answer to this question. This paper presents QSALIENCE, a salience predictor of inquisitive questions. QSALIENCE is instruction-tuned over our dataset of linguist-annotated salience scores of 1,766 (context, question) pairs. A question scores high on salience if answering it would greatly enhance the understanding of the text (Van Rooy, 2003). We show that highly salient questions are empirically more likely to be answered in the same article, bridging potential questions (Onea, 2016) with Questions Under Discussion (Roberts, 2012). We further validate our findings by showing that answering salient questions is an indicator of summarization quality.
Academic dishonesty is met with zero tolerance in higher education, yet plagiarism has become increasingly prevalent in the era of online teaching and learning. Automatic Question Generation (AQG) presents a potential solution to mitigate copying by creating unique questions for each student. Additionally, AQG can provide a vast array of practice questions. Our AQG focuses on generating logical equivalence questions for Discrete Mathematics, a foundational course for first-year computer science students. A literature review reveals that existing AQGs for this type of question generate all propositions that meet user-defined constraints, resulting in inefficiencies and a lack of uniform question difficulty. To address this, we propose a new approach that defines logical equivalence questions using a formal language, translates this language into two sets of generation rules, and develops a linear-time algorithm for question generation. We evaluated our AQG through two experiments. The first involved a group of students completing questions generated by our system. Statistical analysis shows that the accuracy of these questions is comparable to that of textbook questions. The second
The Researchy Questions dataset provides about 100k question queries with complex information needs that require retrieving information about several aspects of a topic. Each query in Researchy Questions is associated with sub-questions that were produced by prompting GPT-4. While Researchy Questions contains labels indicating what documents were clicked after issuing the query, there are no associations in the dataset between sub-questions and relevant documents. In this work, we augment the Researchy Questions dataset with LLM-judged labels for each sub-question using a Llama3.3 70B model. We intend these sub-question labels to serve as a resource for training retrieval models that better support complex information needs.
Existing techniques for citation recommendation are constrained by their adherence to article contents and metadata. We leverage GPT-4o-mini's latent expertise as an inquisitive assistant by instructing it to ask questions which, when answered, could expose new insights about an excerpt from a scientific article. We evaluate the utility of these questions as retrieval queries, measuring their effectiveness in retrieving and ranking masked target documents. In some cases, generated questions ended up being better queries than extractive keyword queries generated by the same model. We additionally propose MMR-RBO, a variation of Maximal Marginal Relevance (MMR) using Rank-Biased Overlap (RBO) to identify which questions will perform competitively with the keyword baseline. As all question queries yield unique result sets, we contend that there are no stupid questions.
Many open-domain questions are under-specified and thus have multiple possible answers, each of which is correct under a different interpretation of the question. Answering such ambiguous questions is challenging, as it requires retrieving and then reasoning about diverse information from multiple passages. We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia. On the challenging ASQA benchmark, which requires generating long-form answers that summarize the multiple answers to an ambiguous question, our method improves performance by 15% (relative improvement) on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs. Retrieving from the database of generated questions also gives large improvements in diverse passage retrieval (by matching user questions q to passages p indirectly, via questions q' generated from p).
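To make the question-to-question indirection concrete, here is a minimal sketch (not the authors' system; the encoder name, toy database, and scoring are illustrative assumptions) of retrieving a passage p by matching the user question q against questions q' generated from p.

from sentence_transformers import SentenceTransformer, util

# Toy "database" of (generated question, source passage) pairs; in the paper the
# questions are generated from Wikipedia, here they are invented for illustration.
db = [
    ("Who directed Titanic (1997)?", "Titanic (1997) was directed by James Cameron."),
    ("Who directed Titanic (1953)?", "Titanic (1953) was directed by Jean Negulesco."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not from the paper
gen_q_embeddings = model.encode([q for q, _ in db], convert_to_tensor=True)

def retrieve(user_question, k=2):
    # Rank passages by how similar the user question is to the generated questions.
    u = model.encode(user_question, convert_to_tensor=True)
    scores = util.cos_sim(u, gen_q_embeddings)[0].tolist()
    ranked = sorted(zip(scores, db), key=lambda t: t[0], reverse=True)[:k]
    return [(passage, score) for score, (_, passage) in ranked]

print(retrieve("Who directed Titanic?"))  # the ambiguous question matches both interpretations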
Online learning platforms provide diverse questions to gauge the learners' understanding of different concepts. The repository of questions has to be constantly updated to ensure a diverse pool of questions to conduct assessments for learners. However, it is infeasible for academicians to manually skim through a large repository of questions to check for duplicates when onboarding new questions from external sources. Hence, we propose a tool, QDup, in this paper that can surface near-duplicate and semantically related questions without any supervised data. The proposed tool follows an unsupervised hybrid pipeline of statistical and neural approaches for incorporating different nuances in similarity for the task of question duplicate detection. We demonstrate that QDup can detect near-duplicate questions and also suggest related questions for practice with remarkable accuracy and speed from a large repository of questions. The demo video of the tool can be found at https://www.youtube.com/watch?v=loh0_-7XLW4.
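The abstract does not spell out QDup's pipeline, so the sketch below is only an illustrative hybrid duplicate scorer under assumed components: a statistical signal (TF-IDF cosine) blended with a neural signal (sentence-embedding cosine) using an assumed weight alpha.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer, util

questions = [
    "What is the derivative of sin(x)?",
    "Differentiate sin(x) with respect to x.",
    "What is the integral of cos(x)?",
]

tfidf = TfidfVectorizer().fit_transform(questions)
statistical_sim = cosine_similarity(tfidf)                        # lexical overlap

encoder = SentenceTransformer("all-MiniLM-L6-v2")                 # assumed encoder
embeddings = encoder.encode(questions, convert_to_tensor=True)
neural_sim = util.cos_sim(embeddings, embeddings).cpu().numpy()   # semantic similarity

alpha = 0.5  # assumed blending weight; QDup's actual weighting is not specified
hybrid_sim = alpha * statistical_sim + (1 - alpha) * neural_sim
print(hybrid_sim[0, 1], hybrid_sim[0, 2])  # the near-duplicate pair should score higher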
I report the existence of exactly one non-trivial solution to the equation $i(A,B)+i(A,\neg B)+i(\neg A,B)+i(\neg A,\neg B)= 0$, where $i(A,B)=\log\frac{P(A\text{ and }B)}{P(A)P(B)}$, and $P(A)$ is the probability of the proposition $A$. The equation specifies an information balance condition between two logical propositions, which is satisfied only by independence and by this new solution. The solution is a new elementary informational relationship between logical propositions, which we denote as $A \sim B$. The $\sim$ relation cannot be expressed as a relationship between probabilities without the use of complex numbers. It can, however, be greatly simplified by expressing each proposition as a combination of a question and an answer, for example, writing ``All men are mortal'' as (Are all men mortal?, Yes). We will study the mathematics of questions and find out what role the $\sim$ relationship plays inside the algebra. We will find that, like propositions, questions can act on probability distributions. A proposition, $X$, can be given, setting $P(X)$ to 1. The question of $X$ can be raised, setting $P(X)$ to $1/2$. Giving the proposition adds information to the probability distribution.
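As a quick numeric illustration of the balance condition (my own sketch, not from the paper), the script below sums the four pointwise terms for a 2x2 joint distribution: independence makes every term zero, while a generic correlated distribution violates the condition. The non-trivial solution itself is not reproduced here.

from math import log

def balance(p_ab, p_anb, p_nab, p_nanb):
    # Sum of the four pointwise mutual information terms i(.,.) for a 2x2 joint distribution.
    p_a, p_b = p_ab + p_anb, p_ab + p_nab
    cells = [(p_ab, p_a, p_b), (p_anb, p_a, 1 - p_b),
             (p_nab, 1 - p_a, p_b), (p_nanb, 1 - p_a, 1 - p_b)]
    return sum(log(pxy / (px * py)) for pxy, px, py in cells)

# Independent case: P(A)=0.3, P(B)=0.6 -> the balance is (numerically) zero.
print(balance(0.18, 0.12, 0.42, 0.28))
# A correlated case generally violates the balance condition (non-zero sum).
print(balance(0.25, 0.05, 0.35, 0.35))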
The past few years have witnessed the flourishing of crowdsourced medical question answering (Q&A) websites. Patients who have medical information demands tend to post questions about their health conditions on these crowdsourced Q&A websites and get answers from other users. However, we observe that a large portion of new medical questions cannot be answered in time or receive only a few answers from these websites. On the other hand, we notice that solved questions have great potential to address this challenge. Motivated by these observations, we propose an end-to-end system that can automatically find similar questions for unsolved medical questions. By learning the vector representations of unsolved questions and their candidate similar questions, the proposed system outputs similar questions according to the similarity between vector representations. Through the vector representation, similar questions are found at the question level, which addresses the issue of diverse expression of medical questions. Further, we handle two more important issues, i.e., the training data generation issue and the efficiency issue, associated with the LSTM training procedure and the retrieval of candidate similar questions.
Visual Question Answering (VQA) models aim to answer natural language questions about given images. Because such models can answer questions that differ from those seen during training, medical VQA has received substantial attention in recent years. However, existing medical VQA models typically focus on answering questions that refer to an entire image rather than to where the relevant content may be located in the image. Consequently, these models offer limited interpretability and little ability to probe the model about specific image regions. This paper proposes a novel approach for medical VQA that addresses this limitation by developing a model that can answer questions about image regions while considering the context necessary to answer the questions. Our experimental results demonstrate the effectiveness of our proposed model, outperforming existing methods on three datasets. Our code and data are available at https://github.com/sergiotasconmorales/locvqa.
A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution $π$ over the numbers $\{1,\ldots,n\}$, and announces it to Bob. She then chooses a number $x$ according to $π$, and Bob attempts to identify $x$ using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for $π$: Bob's questions reveal the codeword for $x$ bit by bit. This strategy finds $x$ using fewer than $H(π)+1$ questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately? Our first main result shows that for every distribution $π$, Bob has a strategy that uses only questions of the form "$x < c$?" and "$x = c$?", and uncovers $x$ using at most $H(π)+1$ questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of $O(rn^{1/r})$ questions that achieve a performance of at most $H(π)+r$, and sh
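The Huffman bound mentioned above is easy to check numerically; the following is a small sketch (my own, with an arbitrary example distribution) that computes the expected number of Yes/No questions under a Huffman strategy and compares it with $H(π)+1$.

import heapq
from math import log2

def huffman_expected_length(pi):
    # Expected codeword length = expected number of Yes/No questions for distribution pi.
    # Repeatedly merge the two lightest subtrees; each merge adds one question (one bit)
    # to every symbol inside the merged subtrees.
    heap = [(p, i, [i]) for i, p in enumerate(pi)]
    heapq.heapify(heap)
    depth = [0.0] * len(pi)
    uid = len(pi)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return sum(p * d for p, d in zip(pi, depth))

pi = [0.4, 0.3, 0.2, 0.1]
entropy = -sum(p * log2(p) for p in pi)
print(huffman_expected_length(pi), "<", entropy + 1)  # e.g. 1.9 < ~2.85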
In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer. In this paper, we are interested in the robustness of a QR system to questions varying in rewriting hardness or difficulty. Since there is a lack of questions classified based on their rewriting hardness, we first propose a heuristic method to automatically classify questions into subsets of varying hardness, by measuring the discrepancy between a question and its rewrite. To find out what makes questions hard or easy to rewrite, we then conduct a human evaluation to annotate the rewriting hardness of questions. Finally, to enhance the robustness of QR systems to questions of varying hardness, we propose a novel learning framework for QR that first trains a QR model independently on each subset of questions of a certain level of hardness, then combines these QR models as one joint model for inference. Experimental results on two datasets show that our framework improves the overall performance compared to the baselines.
We review some unresolved theoretical issues in three-dimensional two-component Fermi gases, drawing on recent experiments on cold atoms in immaterial traps close to a magnetic Feshbach resonance. We distinguish successively (i) the open questions arising in the few-body problem with Wigner-Bethe-Peierls contact interactions - essentially the stability of the gas with respect to the Efimov effect and the calculation of the cluster (or virial) coefficients, (ii) those arising in the effective low-energy theory of Landau and Khalatnikov quantum hydrodynamics - essentially the damping of phonon modes and the coherence time of the condensate of pairs, and finally (iii) questions requiring a complete, microscopic solution of the many-body problem, such as the specific properties of the acoustic excitation branch (Goldstone) of the condensate of pairs, or its collective excitation branch (Higgs) in the broken-pair continuum.
This paper makes one of the first efforts toward automatically generating complex questions from knowledge graphs. In particular, we study how to leverage existing simple question datasets for this task, under two separate scenarios: using either sub-questions of the target complex questions, or distantly related pseudo sub-questions when the former are unavailable. First, a competitive base model named CoG2Q is designed to map complex query graphs to natural language questions. Afterwards, we propose two extension models, namely CoGSub2Q and CoGSub^m2Q, respectively for the above two scenarios. The former encodes and copies from a sub-question, while the latter further scores and aggregates multiple pseudo sub-questions. Experimental results show that the extension models significantly outperform not only the base CoG2Q, but also its augmented variant that uses simple questions as additional training examples. This demonstrates the importance of instance-level connections between simple and corresponding complex questions, which may be underexploited by straightforward data augmentation of CoG2Q that builds model-level connections through learned parameters.
In the distributional Twenty Questions game, Bob chooses a number $x$ from $1$ to $n$ according to a distribution $μ$, and Alice (who knows $μ$) attempts to identify $x$ using Yes/No questions, which Bob answers truthfully. Her goal is to minimize the expected number of questions. The optimal strategy for the Twenty Questions game corresponds to a Huffman code for $μ$, yet this strategy could potentially use all $2^n$ possible questions. Dagan et al. constructed a set of $1.25^{n+o(n)}$ questions which suffice to construct an optimal strategy for all $μ$, and showed that this number is optimal (up to sub-exponential factors) for infinitely many $n$. We determine the optimal size of such a set of questions for all $n$ (up to sub-exponential factors), answering an open question of Dagan et al. In addition, we generalize the results of Dagan et al. to the $d$-ary setting, obtaining similar results with $1.25$ replaced by $1 + (d-1)/d^{d/(d-1)}$.
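As a quick sanity check (immediate from the formula, not an additional claim of the paper), the $d$-ary base reduces to the binary one at $d=2$: $1 + (d-1)/d^{d/(d-1)} = 1 + 1/2^{2} = 1.25$.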
Large language models (LLMs) have significantly improved the ability to perform tasks in the field of code generation. However, there is still a gap between LLMs being capable coders and being top-tier software engineers. Based on the observation that top-level software engineers often ask clarifying questions to reduce ambiguity in both requirements and coding solutions, I argue that the same should be applied to LLMs for code generation tasks. By asking probing questions on various topics before generating the final code, the challenges of programming with LLMs, such as unclear intent specification, lack of computational thinking, and undesired code quality, may be alleviated. This, in turn, increases confidence in the generated code. In this work, I explore how to leverage better communication skills to achieve greater confidence in generated code. I propose a communication-centered process that uses an LLM-generated communicator to identify issues with high ambiguity or low confidence in problem descriptions and generated code. I then ask clarifying questions to obtain responses from users for refining the code.
Answering complex questions involving multiple entities and relations is a challenging task. Logically, the answer to a complex question should be derived by decomposing the complex question into multiple simple sub-questions and then answering those sub-questions. Existing work has followed this strategy but has not attempted to optimize the order in which those sub-questions are answered. As a result, the sub-questions are answered in an arbitrary order, leading to a larger search space and a higher risk of missing an answer. In this paper, we propose a novel reinforcement learning (RL) approach to answering complex questions that can learn a policy to dynamically decide which sub-question should be answered at each stage of reasoning. We leverage the expected value-variance criterion to enable the learned policy to balance between the risk and utility of answering a sub-question. Experimental results show that the RL approach can substantially improve the optimality of ordering the sub-questions, leading to improved accuracy of question answering. The proposed method for learning to order sub-questions is general and can thus be potentially combined with many existing ideas for answering complex questions.
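The abstract does not give the exact form of the expected value-variance criterion, so the sketch below only illustrates a common mean-variance trade-off, score = E[r] - lambda * Var[r], for picking which sub-question to answer next; the candidate questions, sampled returns, and risk weight are all assumptions, not details from the paper.

from statistics import mean, pvariance

def value_variance_score(sampled_returns, risk_weight=0.5):
    # Trade off expected utility against risk: E[r] - lambda * Var[r].
    return mean(sampled_returns) - risk_weight * pvariance(sampled_returns)

# Estimated returns of answering each candidate sub-question first
# (e.g. from rollouts of the current policy).
candidates = {
    "Which country is the river in?": [0.9, 0.8, 0.85],   # high value, low risk
    "What is its exact length in km?": [1.0, 0.1, 0.4],   # high variance -> risky
}
best = max(candidates, key=lambda q: value_variance_score(candidates[q]))
print(best)  # the low-risk sub-question is answered first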
Question Answering (QA) systems require a large amount of annotated data, which is costly and time-consuming to gather. Converting datasets from existing QA benchmarks is challenging due to their different formats and complexities. To address these issues, we propose an algorithm that automatically generates shorter questions, resembling day-to-day human communication in the Natural Questions (NQ) dataset, from longer trivia questions in the Quizbowl (QB) dataset by converting between the styles of the two datasets. This provides an automated way to generate more data for our QA systems. To ensure quality as well as quantity of data, we detect and remove ill-formed questions using a neural classifier. We demonstrate that, in a low-resource setting, using the generated data improves the QA performance over the baseline system on both NQ and QB data. Our algorithm improves the scalability of training data while maintaining the quality of data for QA systems.
We discuss a study to evaluate the extent to which free-response questions could be approximated by multiple-choice equivalents. Two carefully designed research-based multiple-choice questions were transformed into a free-response format and administered on the final exam of a calculus-based introductory physics course. The original multiple-choice questions were administered on the final exam of another, similar introductory physics course. Findings suggest that carefully designed multiple-choice questions can reflect the relative performance on the free-response questions while maintaining the benefits of ease of grading and quantitative analysis, especially if the different choices in the multiple-choice questions are weighted to reflect the different levels of understanding that students display.
Question-answering software is becoming increasingly integrated into our daily lives, with prominent examples including Apple Siri and Amazon Alexa. Ensuring the quality of such systems is critical, as incorrect answers could lead to significant harm. Current state-of-the-art testing approaches apply metamorphic relations to existing test datasets, generating test questions based on these relations. However, these methods have two key limitations. First, they often produce unnatural questions that humans are unlikely to ask, reducing the effectiveness of the generated questions in identifying bugs that might occur in real-world scenarios. Second, these questions are generated from pre-existing test datasets, ignoring the broader context and thus limiting the diversity and relevance of the generated questions. In this work, we introduce CQ^2A, a context-driven question generation approach for testing question-answering systems. Specifically, CQ^2A extracts entities and relationships from the context to form ground truth answers, and utilizes large language models to generate questions based on these ground truth answers and the surrounding context. We also propose the consistency ve
Despite rising global usage of large language models (LLMs), their ability to generate long-form answers to culturally specific questions remains unexplored in many languages. To fill this gap, we perform the first study of textual multilingual long-form QA by creating CaLMQA, a dataset of 51.7K culturally specific questions across 23 different languages. We define culturally specific questions as those that refer to concepts unique to one or a few cultures, or that have different answers depending on the cultural or regional context. We obtain these questions by crawling naturally-occurring questions from community web forums in high-resource languages, and by hiring native speakers to write questions in under-resourced, rarely-studied languages such as Fijian and Kirundi. Our data collection methodologies are translation-free, enabling the collection of culturally unique questions like "Kuber iki umwami wa mbere w'uburundi yitwa Ntare?" (Kirundi; English translation: "Why was the first king of Burundi called Ntare (Lion)?"). We evaluate the factuality, relevance and surface-level quality of LLM-generated long-form answers, finding that (1) for many languages, even the best models make critical errors.