In today's landscape, hardware development teams face increasing demands for better quality products, greater innovation, and shorter manufacturing lead times. Despite the need for more efficient and effective processes, hardware designers continue to struggle with a lack of awareness of design changes and other collaborators' actions, a persistent issue in decades of CSCW research. One significant and unaddressed challenge is understanding and managing dependencies between 3D CAD (computer-aided design) models, especially when products can contain thousands of interconnected components. In this two-phase formative study, we explore designers' pain points in CAD dependency management through a thematic analysis of 100 online forum discussions and semi-structured interviews with 10 designers. We identify nine key challenges related to the traceability, navigation, and consistency of CAD dependencies that hinder the effective coordination of hardware development teams. To address these challenges, we propose design goals and necessary features to enhance hardware designers' awareness and management of dependencies, with the ultimate goal of improving collaborative workflows.
Geometric Deep Learning techniques have become a transformative force in the field of Computer-Aided Design (CAD) and have the potential to revolutionize how designers and engineers approach and enhance the design process. By harnessing the power of machine learning-based methods, CAD designers can optimize their workflows, save time and effort while making better-informed decisions, and create designs that are both innovative and practical. The ability to process CAD designs represented by geometric data and to analyze their encoded features enables the identification of similarities among diverse CAD models, the proposition of alternative designs and enhancements, and even the generation of novel design alternatives. This survey offers a comprehensive overview of learning-based methods in computer-aided design across various categories, including similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds and single/multi-view images. Additionally, it provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain. The final discussion delves into the challenges prevalent in this field and potential future research directions.
Security risk assessment is essential in establishing the trustworthiness and reliability of modern systems. While various security risk assessment approaches exist, prevalent applications are "pen and paper" implementations that -- even if performed digitally using computers -- remain prone to authoring mistakes and inconsistencies. Computer-aided design approaches can transform security risk assessments into more rigorous and sustainable efforts. This is of value to both industrial practitioners and researchers, who practice security risk assessments to reflect on systems' designs and to contribute to the discipline's state-of-the-art. In this article, we report the application of a model-based security design tool to reproduce a previously reported security assessment. The main contributions are: 1) an independent attempt to reproduce a refereed article describing a real security risk assessment of a system; 2) comparison of a new computer-aided application with a previous non-computer-aided application, based on a published, real-world case study; 3) a showcase for the potential advantages -- for both practitioners and researchers -- of using computer-aided design approaches to conduct security risk assessments.
Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like "which image is better, A or B?", leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper's contributions. On the one hand, we argue that, when conducted hastily as an afterthought, such studies lead to an increase in uninformative and potentially misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and applied perception to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research.
Climate change and resource depletion demand a shift from the dominant linear "take-make-use-dispose" paradigm of construction toward circular, low-waste practices. Material reuse offers a promising pathway by reducing raw material extraction, mitigating waste, and extending the service lifespan of carbon-sequestering materials such as timber. Realizing this potential, however, requires addressing technical and logistical challenges across both design and construction to accommodate heterogeneous, reclaimed material inventories. This paper presents an integrated framework that couples data-driven computational design with feedback-driven adaptive human-robot collaborative (co-robotic) fabrication and assembly to enable the realization of nonstandard structures made from reclaimed timber of varying lengths and geometries, supplemented with new off-the-shelf timber when necessary. The framework is validated through Timbrelyn, a built case-study installation that demonstrates how timber reuse can inform and enhance architectural expression. This work contributes to the development of integrated design-to-fabrication workflows that advance adaptive, feedback-driven methods to handle the heterogeneity of reclaimed material inventories.
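To make the inventory-matching step concrete, here is a minimal sketch of assigning reclaimed timber stock to design members, with new off-the-shelf timber as a fallback; the greedy best-fit rule, the kerf allowance, and all lengths are illustrative assumptions rather than the paper's actual algorithm.

```python
# A minimal sketch of one computational-design step such a framework needs:
# assigning reclaimed timber stock of varying lengths to design elements,
# falling back to new off-the-shelf timber when no stock fits. The greedy
# best-fit rule and kerf allowance are illustrative assumptions.
def assign_stock(required_lengths, inventory, kerf=0.01):
    """Match each required member length to the shortest adequate stock piece."""
    stock = sorted(inventory)
    plan = []
    for need in sorted(required_lengths, reverse=True):
        fit = next((s for s in stock if s >= need + kerf), None)
        if fit is None:
            plan.append((need, "new off-the-shelf"))
        else:
            stock.remove(fit)
            plan.append((need, f"reclaimed {fit:.2f} m"))
    return plan

members = [2.4, 1.2, 3.0, 0.9]           # lengths the design calls for (m)
reclaimed = [2.5, 1.0, 1.3]              # heterogeneous reclaimed inventory (m)
for need, source in assign_stock(members, reclaimed):
    print(f"{need:.1f} m member <- {source}")
```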
MindSculpt enables users to generate a wide range of hybrid geometries in Grasshopper in real time simply by thinking about those geometries. This design tool combines a brain-computer interface (BCI) with the parametric design platform Grasshopper, creating an intuitive design workflow that shortens the latency between ideation and implementation compared to traditional computer-aided design tools based on mouse-and-keyboard paradigms. The project arises from transdisciplinary research between neuroscience and architecture, with the goal of building a cyber-human collaborative tool that is capable of leveraging the complex and fluid nature of thinking in the design process. MindSculpt applies a supervised machine-learning approach, based on a support vector machine (SVM) model, to identify patterns of brain waves that occur in EEG data when participants mentally rotate four different solid geometries. The researchers tested MindSculpt with participants who had no prior experience in design and found that the tool was enjoyable to use and could contribute to design ideation and artistic endeavors.
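As a rough illustration of the kind of supervised pipeline described above, the sketch below trains an SVM on per-channel variance features of synthetic EEG epochs; the data, features, and hyperparameters are all placeholder assumptions, not MindSculpt's actual configuration.

```python
# A minimal sketch of an SVM classifier over EEG features for four mentally
# rotated geometries. All data and parameters here are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 256      # hypothetical EEG epochs
X_raw = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, 4, n_epochs)                   # four solid geometries

# Simple band-power-style features: per-channel signal variance.
X = X_raw.var(axis=2)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())     # chance level ~0.25 on random data
```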
This PhD dissertation investigates garbage-free reversible computing systems from abstract design to physical gate-level implementation. In reversible logic, we propose a ripple-block carry adder and work towards a reversible circuit for general multiplication. At a higher level, we propose abstract designs for reversible systems, such as a small von Neumann architecture that can execute programs written in a simple reversible two-address instruction set, a novel reversible arithmetic logic unit, and a linear cosine transform. To aid the design of reversible logic circuits, we have designed two reversible functional hardware description languages: a linear-typed higher-level language and a gate-level point-free combinator language. We suggest a garbage-free design flow in which circuits are described in the higher-level language and then translated to the combinator language, from which methods for place-and-route of CMOS gates can be applied. We have also made standard cell layouts of the reversible gates in complementary pass-gate CMOS logic and used these to fabricate the ALU design. In total, this dissertation has shown that it is possible to design non-trivial reversible computing systems, from abstract designs down to fabricated gate-level implementations, without generating garbage.
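For readers unfamiliar with reversible logic, the sketch below models the standard Toffoli and CNOT gates as bijections on bit vectors and composes them into a textbook reversible half adder; this is a generic illustration, not a circuit from the dissertation.

```python
# A minimal sketch of gate-level reversible logic: CNOT (Feynman) and Toffoli
# gates, each a bijection on bit vectors, composed into a half adder.
from itertools import product

def toffoli(a, b, c):          # c flips iff a and b are both 1
    return a, b, c ^ (a & b)

def cnot(a, b):                # b flips iff a is 1
    return a, b ^ a

# Reversibility means the gate permutes its input space: every output is unique.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8       # 8 distinct outputs for 8 inputs: a bijection

# A reversible half adder: with one ancilla line c=0, Toffoli then CNOT yields
# (a, a XOR b, carry) -- the inputs remain recoverable, so no garbage is made.
def half_adder(a, b):
    a, b, c = toffoli(a, b, 0)
    a, b = cnot(a, b)
    return a, b, c             # (a, sum, carry)

print(half_adder(1, 1))        # (1, 0, 1)
```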
This paper presents a survey of ocean simulation and rendering methods in computer graphics. To model and animate the ocean's surface, these methods rely on two main approaches: on the one hand, those that approximate ocean dynamics with parametric, spectral, or hybrid models and use empirical laws from oceanographic research. We will see that these methods essentially allow the simulation of ocean scenes in the deep-water domain, without breaking waves. On the other hand, physically based methods use the Navier-Stokes equations (NSE) to represent breaking waves and, more generally, the ocean surface near the shore. We also describe ocean rendering methods in computer graphics, with a special interest in the simulation of phenomena such as foam and spray, and the interaction of light with the ocean surface.
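To ground the parametric/spectral approach in code, here is a minimal deep-water heightfield built as a sum of sinusoids obeying the dispersion relation w^2 = g*k; the toy amplitude spectrum and wave count stand in for empirical oceanographic spectra such as Phillips'.

```python
# A minimal sketch of a spectral/parametric ocean model: a deep-water
# heightfield as a sum of sinusoids whose frequencies obey the deep-water
# dispersion relation. Amplitudes and wave count are illustrative.
import numpy as np

g = 9.81
rng = np.random.default_rng(1)
n_waves = 32
k = rng.uniform(0.05, 0.5, n_waves)                # wavenumbers
theta = rng.uniform(0, 2 * np.pi, n_waves)         # travel directions
amp = 0.05 / k                                     # toy spectrum: long waves taller
phase = rng.uniform(0, 2 * np.pi, n_waves)
omega = np.sqrt(g * k)                             # deep-water dispersion

def height(x, y, t):
    """Ocean surface elevation at position (x, y), time t."""
    arg = k * (x * np.cos(theta) + y * np.sin(theta)) - omega * t + phase
    return np.sum(amp * np.cos(arg))

print(height(0.0, 0.0, 0.0), height(0.0, 0.0, 1.0))
```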
Microstructures, characterized by intricate geometry at the microscopic scale, hold the promise of important disruptions in the field of mechanical engineering due to the superior mechanical properties they offer. One fundamental technique of microstructure design and manufacturing is geometric modeling, which generates the 3D computer models required to run high-level procedures such as simulation, optimization, and process planning. There is, however, a lack of comprehensive discussions on this body of knowledge. The goal of this paper is to compile existing microstructure modeling methods and clarify the challenges, progress, and limitations of current research. The paper concludes with future research directions that may improve and/or complement current methods, such as compressive and generative microstructure representations. By doing so, the paper sheds light on what has already been made possible for microstructure modeling, what developments can be expected in the near future, and which topics remain problematic.
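As one concrete instance of the geometric modeling methods surveyed, the sketch below voxelizes a gyroid shell, a common implicit (function-based) microstructure representation; the cell size, thickness, and resolution are illustrative choices, not values from the paper.

```python
# A minimal sketch of implicit microstructure modeling: a gyroid triply
# periodic minimal surface shell, voxelized over one unit cell.
import numpy as np

def gyroid(x, y, z, cell=1.0, thickness=0.2):
    """Signed field: values below zero lie inside the gyroid shell."""
    s = 2 * np.pi / cell
    f = (np.sin(s * x) * np.cos(s * y)
         + np.sin(s * y) * np.cos(s * z)
         + np.sin(s * z) * np.cos(s * x))
    return np.abs(f) - thickness / 2

# Voxelize one unit cell: negative values are solid material.
xs = np.linspace(0, 1, 64)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
solid = gyroid(X, Y, Z) < 0
print(f"relative density: {solid.mean():.2f}")     # volume fraction of the cell
```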
To relieve the computational cost of design evaluations using expensive finite element simulations, surrogate models have been widely applied in computer-aided engineering design. Machine learning algorithms (MLAs) have been implemented as surrogate models due to their capability of learning the complex interrelations between the design variables and the response from big datasets. Typically, an MLA regression model contains model parameters and hyperparameters. The model parameters are obtained by fitting the training data. Hyperparameters, which govern the model structures and the training processes, are assigned by users before training. There is a lack of systematic studies on the effect of hyperparameters on the accuracy and robustness of the surrogate model. In this work, we establish a hyperparameter optimization (HOpt) framework to deepen our understanding of this effect. Four frequently used MLAs, namely Gaussian Process Regression (GPR), Support Vector Machine (SVM), Random Forest Regression (RFR), and Artificial Neural Network (ANN), are tested on four benchmark examples. For each MLA model, the model accuracy and robustness before and after HOpt are compared.
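A minimal sketch of what an HOpt loop looks like in practice follows: cross-validated grid search over an SVM surrogate's hyperparameters on a toy regression problem. The benchmark function, model, and grid are assumptions for illustration, not the paper's setup.

```python
# A minimal sketch of hyperparameter optimization (HOpt) for an MLA surrogate:
# cross-validated search over the hyperparameters that govern model structure.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 3))                       # design variables
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

# Hyperparameters are assigned before training; here we search them by CV error.
grid = {"C": [0.1, 1, 10], "gamma": [0.1, 1, 10], "epsilon": [0.01, 0.1]}
hopt = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                    scoring="neg_mean_squared_error")
hopt.fit(X, y)
print(hopt.best_params_, -hopt.best_score_)            # tuned surrogate accuracy
```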
Graphic Design encompasses a wide range of activities from the design of traditional print media (e.g., books and posters) to site-specific (e.g., signage systems) and electronic media (e.g., interfaces). Its practice has always explored the new possibilities of information and communication technologies. Therefore, interactivity and participation have become key features in the design process. Even in traditional print media, graphic designers are trying to enhance user experience and exploring new interaction models. Moving posters are an example of this. This type of poster combines the specific features of the motion and print worlds in order to produce attractive forms of communication that explore and exploit the potential of digital screens. In our opinion, the next step towards the integration of moving posters with the surroundings in which they operate is incorporating data from the environment, which also enables the seamless participation of the audience. As such, the adoption of computer vision techniques for moving poster design becomes a natural approach. Following this line of thought, we present a system wherein computer vision techniques are used to shape a moving poster.
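To sketch how environmental data could drive such a poster, the toy loop below uses OpenCV frame differencing to estimate audience motion and maps it to a hypothetical animation-speed parameter; the mapping and all constants are illustrative assumptions, not the system described in the paper.

```python
# A minimal sketch of computer vision driving a moving-poster parameter:
# frame differencing on a live camera feed estimates audience activity.
import cv2

cap = cv2.VideoCapture(0)                  # assumes a camera at index 0
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev).mean() / 255.0   # 0 = still, 1 = chaotic
    prev = gray
    # Hypothetical mapping: audience activity modulates poster animation speed.
    poster_speed = 0.2 + 2.0 * motion
    print(f"animation speed: {poster_speed:.2f}")
    if cv2.waitKey(30) == 27:              # Esc to quit
        break
cap.release()
```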
Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. Among the components of CAD models that are particularly difficult to make are the highly structured 2D sketches that lie at the heart of every 3D construction. In this work, we propose a machine learning model capable of automatically generating such sketches. Through this, we pave the way for developing intelligent tools that would help engineers create better designs with less effort. Our method is a combination of a general-purpose language modeling technique alongside an off-the-shelf data serialization protocol. We show that our approach has enough flexibility to accommodate the complexity of the domain and performs well for both unconditional synthesis and image-to-sketch translation.
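The core idea, serializing a structured sketch into a flat token stream that a general-purpose language model can consume, might look roughly like the sketch below; the primitive schema, quantization, and vocabulary are illustrative assumptions, not the paper's actual serialization protocol.

```python
# A minimal sketch of flattening structured 2D CAD sketch primitives into
# discrete tokens for a language model. Schema and vocabulary are illustrative.
QUANT = 64  # coordinates quantized to a small grid, as is common for LMs

def serialize(primitives):
    """Flatten sketch primitives into a discrete token sequence."""
    tokens = []
    for prim in primitives:
        tokens.append(f"<{prim['type']}>")
        for coord in prim["params"]:
            tokens.append(str(int(round(coord * (QUANT - 1)))))
    tokens.append("<end>")
    return tokens

sketch = [
    {"type": "line",   "params": [0.0, 0.0, 1.0, 0.0]},
    {"type": "line",   "params": [1.0, 0.0, 1.0, 1.0]},
    {"type": "circle", "params": [0.5, 0.5, 0.25]},
]
print(serialize(sketch))
# ['<line>', '0', '0', '63', '0', '<line>', '63', '0', '63', '63',
#  '<circle>', '32', '32', '16', '<end>']
```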
Sensory illusions - where a sensory stimulus causes people to perceive effects that are altered by a different sensory stimulus - have the potential to enrich mixed-reality-based interactions. The well-known color-temperature illusion is a sensory illusion that causes people to, somewhat counterintuitively, perceive blue objects as feeling warmer and red objects as feeling colder. There is currently little information about whether this illusion can be recreated in mixed reality (MR). Additionally, it is unknown whether dynamic graphical effects made possible by mixed-reality systems could create a similar or potentially stronger effect than the color-temperature illusion. The results of our study (n=30) support that the color-temperature illusion can be recreated in MR and that dynamic graphics can create a new temperature-sensory illusion. Our dynamic-graphics-temperature illusion creates a stronger effect than the color-temperature illusion and has a more intuitive relationship between the stimulus and the effect: cold graphical effects (a virtual ice ball) are perceived as colder and hot graphical effects (a virtual fire ball) as hotter. Our results demonstrate that mixed reality has the potential to both recreate known sensory illusions and produce new, stronger ones.
Compute-in-memory (CiM) emerges as a promising solution to hardware challenges in artificial intelligence (AI) and the Internet of Things (IoT), particularly addressing the "memory wall" issue. By utilizing nonvolatile memory (NVM) devices in a crossbar structure, CiM efficiently accelerates multiply-accumulate (MAC) computations, the crucial operations in neural networks and other AI models. Among various NVM devices, the Ferroelectric FET (FeFET) is particularly appealing for ultra-low-power CiM arrays due to its CMOS compatibility, voltage-driven write/read mechanisms, and high ION/IOFF ratio. Moreover, subthreshold-operated FeFETs, which operate at scaled voltages in the subthreshold region, can further minimize the power consumption of the CiM array. However, subthreshold FeFETs are susceptible to temperature drift, resulting in computation accuracy degradation. Existing solutions exhibit weak temperature resilience at larger array sizes and only support 1-bit precision. In this paper, we propose TReCiM, an ultra-low-power temperature-resilient multibit 2FeFET-1T CiM design that reliably performs MAC operations in the subthreshold-FeFET region across temperatures ranging from 0 to 85 degrees Celsius.
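To illustrate the operation at stake, the toy model below computes a crossbar MAC as summed column currents and adds a crude exponential temperature term standing in for subthreshold-FeFET drift; the device model and constants are illustrative stand-ins, not TReCiM's circuit.

```python
# A toy sketch of the MAC a CiM crossbar accelerates: column current as the
# dot product of input voltages and stored conductances, with an illustrative
# exponential term mimicking subthreshold temperature drift.
import numpy as np

def crossbar_mac(inputs, weights, temp_c=25.0, drift=0.01):
    """Column current = inputs . weights, scaled by a toy thermal drift term."""
    thermal = np.exp(drift * (temp_c - 25.0))      # subthreshold current grows with T
    return thermal * (inputs @ weights)

v = np.array([1.0, 0.0, 1.0, 1.0])                 # input activations (word lines)
g = np.random.default_rng(3).uniform(0, 1, (4, 2)) # stored conductances (weights)
print(crossbar_mac(v, g, temp_c=25.0))             # nominal MAC result
print(crossbar_mac(v, g, temp_c=85.0))             # drifted result -> accuracy loss
```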
Recent deep learning methods can generate diverse graphic design layouts efficiently. However, these methods often create layouts with flaws, such as misalignment, unwanted overlaps, and unsatisfied containment. To tackle this issue, we propose an optimization-based method called LayoutRectifier, which gracefully rectifies auto-generated graphic design layouts to reduce these flaws while minimizing deviation from the generated layout. The core of our method is a two-stage optimization. First, we utilize grid systems, which professional designers commonly use to organize elements, to mitigate misalignments through discrete search. Second, we introduce a novel box containment function designed to adjust the positions and sizes of the layout elements, preventing unwanted overlapping and promoting desired containment. We evaluate our method on content-agnostic and content-aware layout generation tasks and achieve better-quality layouts that are more suitable for downstream graphic design tasks. Our method complements learning-based layout generation methods and does not require additional training.
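The two ingredients can be sketched in a few lines: snapping element boxes to a grid system and scoring containment violations. Both formulas below are illustrative stand-ins for the paper's actual discrete search and box containment function.

```python
# A minimal sketch of layout rectification primitives: grid snapping and a
# box-containment penalty that is zero when a child box lies inside its parent.
def snap_to_grid(box, grid=8):
    """box = (x, y, w, h); snap position and size to the nearest grid line."""
    x, y, w, h = box
    r = lambda v: grid * round(v / grid)
    return (r(x), r(y), max(grid, r(w)), max(grid, r(h)))

def containment_penalty(child, parent):
    """Sum of how far each child edge sticks out of the parent (0 = contained)."""
    cx, cy, cw, ch = child
    px, py, pw, ph = parent
    return (max(0, px - cx) + max(0, py - cy)
            + max(0, (cx + cw) - (px + pw)) + max(0, (cy + ch) - (py + ph)))

label = snap_to_grid((13, 22, 61, 17))
print(label)                                       # (16, 24, 64, 16)
print(containment_penalty(label, (0, 0, 64, 64)))  # 16: overflows on the right
```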
As standard data-loading processes, quantum state preparation and block-encoding are critical and necessary for quantum computing applications, including quantum machine learning, Hamiltonian simulation, and many others. Yet existing protocols suffer from poor robustness under device imperfection, limiting their practicality for real-world applications. Here, this limitation is overcome based on a fanin process designed in a tree-like bucket-brigade architecture. It suppresses error propagation between different branches, thus exponentially improving robustness compared to existing depth-optimal methods. Moreover, the approach simultaneously achieves state-of-the-art fault-tolerant circuit depth, gate count, and STA. As an example application, we show that for quantum simulation of geometrically local Hamiltonians, the code distance of each logical qubit can potentially be reduced exponentially using our technique. We believe that our technique can significantly enhance the power of quantum computing in both the near-term and fault-tolerant regimes.
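For contrast with the paper's specialized construction, here is what generic amplitude-encoding state preparation looks like using Qiskit's built-in routine; the data vector is arbitrary, and the resulting generic circuit lacks the robustness properties of the bucket-brigade fanin approach.

```python
# A minimal sketch of the data-loading primitive in question: amplitude
# encoding a classical vector into a quantum state with Qiskit's generic
# state-preparation routine (illustrative baseline, not the paper's method).
import numpy as np
from qiskit import QuantumCircuit

data = np.array([0.1, 0.4, 0.2, 0.7, 0.3, 0.3, 0.1, 0.3])
amps = data / np.linalg.norm(data)         # amplitudes must be normalized

qc = QuantumCircuit(3)                     # 2^3 = 8 amplitudes
qc.initialize(amps, [0, 1, 2])
print(qc.decompose(reps=3))                # the (deep) generic circuit
```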
Being able to duplicate published research results is an important process of conducting research, whether to build upon these findings or to compare with them. This process is called "replicability" when using the original authors' artifacts (e.g., code), or "reproducibility" otherwise (e.g., re-implementing algorithms). Reproducibility and replicability of research results have gained a lot of interest recently, with assessment studies being conducted in various fields, and they are often seen as a trigger for better result diffusion and transparency. In this work, we assess replicability in Computer Graphics by evaluating whether the code is available and whether it works properly. As a proxy for this field, we compiled, ran, and analyzed 151 codes out of 374 papers from the 2014, 2016, and 2018 SIGGRAPH conferences. This analysis shows a clear increase in the number of papers with available and operational research codes, with a dependency on the subfields, and indicates a correlation between code replicability and citation count. We further provide an interactive tool to explore our results and evaluation data.
We present Piko, a framework for designing, optimizing, and retargeting implementations of graphics pipelines on multiple architectures. Piko programmers express a graphics pipeline by organizing the computation within each stage into spatial bins and specifying a scheduling preference for these bins. Our compiler, Pikoc, compiles this input into an optimized implementation targeted to a massively parallel GPU or a multicore CPU. Piko manages work granularity in a programmable and flexible manner, allowing programmers to build load-balanced parallel pipeline implementations, to exploit spatial and producer-consumer locality in a pipeline implementation, and to explore tradeoffs between these considerations. We demonstrate that Piko can implement a wide range of pipelines, including rasterization, Reyes, ray tracing, rasterization/ray tracing hybrid, and deferred rendering. Piko allows us to implement efficient graphics pipelines with relative ease and to quickly explore design alternatives by modifying the spatial binning configurations and scheduling preferences for individual stages, all while delivering real-time performance that is within a factor of six of state-of-the-art renderers.
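The central abstraction, organizing a stage's work into screen-space bins that a scheduler can process independently, can be sketched as below; the bin size, triangle format, and bounding-box binning rule are illustrative simplifications, not Pikoc's implementation.

```python
# A minimal sketch of spatial binning for a pipeline stage: triangles are
# mapped (by bounding box) to screen-space bins, each an independent work unit.
from collections import defaultdict

BIN = 32  # pixels per bin side (illustrative granularity choice)

def bin_triangles(triangles):
    """Map each triangle to the screen-space bins its bounding box touches."""
    bins = defaultdict(list)
    for tid, verts in enumerate(triangles):
        xs = [v[0] for v in verts]
        ys = [v[1] for v in verts]
        for bx in range(int(min(xs)) // BIN, int(max(xs)) // BIN + 1):
            for by in range(int(min(ys)) // BIN, int(max(ys)) // BIN + 1):
                bins[(bx, by)].append(tid)
    return bins

tris = [[(5, 5), (60, 10), (20, 50)], [(100, 100), (120, 110), (105, 130)]]
for b, work in sorted(bin_triangles(tris).items()):
    print(f"bin {b}: triangles {work}")   # a scheduler orders these work units
```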
We present a computer-aided detection algorithm for polyps in optical colonoscopy images. Polyps are the precursors to colon cancer. In the US alone, more than 14 million optical colonoscopies are performed every year, mostly to screen for polyps. Optical colonoscopy has been shown to have an approximately 25% polyp miss rate due to the convoluted folds and bends present in the colon. We use a machine learning algorithm to infer a depth map for a given optical colonoscopy image and then use a detailed pre-built polyp profile to detect and delineate the boundaries of polyps in the image. Our algorithm achieves a best recall of 84.0% and a best specificity of 83.4%.
While large vision-language models can generate motion graphics animations from text prompts, they regularly fail to include all spatio-temporal properties described in the prompt. We introduce MoVer, a motion verification DSL based on first-order logic that can check spatio-temporal properties of a motion graphics animation. We identify a general set of such properties that people commonly use to describe animations (e.g., the direction and timing of motions, the relative positioning of objects, etc.). We implement these properties as predicates in MoVer and provide an execution engine that can apply a MoVer program to any input SVG-based motion graphics animation. We then demonstrate how MoVer can be used in an LLM-based synthesis and verification pipeline for iteratively refining motion graphics animations. Given a text prompt, our pipeline synthesizes a motion graphics animation and a corresponding MoVer program. Executing the verification program on the animation yields a report of the predicates that failed, and the report can be automatically fed back to the LLM to iteratively correct the animation. To evaluate our pipeline, we build a synthetic dataset of 5600 text prompts paired with ground-truth MoVer verification programs.
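To give a flavor of such verification, the sketch below evaluates two spatio-temporal predicates over sampled motion tracks; the predicate names, keyframe format, and report structure are illustrative assumptions rather than MoVer's actual DSL.

```python
# A minimal sketch of predicate-based motion verification: spatio-temporal
# properties checked against an animation's sampled (time, x, y) tracks.
def moves_right(track):
    """True if the object's x position is non-decreasing and ends further right."""
    xs = [x for (t, x, y) in track]
    return all(b >= a for a, b in zip(xs, xs[1:])) and xs[-1] > xs[0]

def starts_after(track_a, track_b):
    """True if track_a's first motion begins after track_b's has finished."""
    return track_a[0][0] >= track_b[-1][0]

# Sampled tracks for two animated objects (hypothetical animation output).
circle = [(0.0, 0, 0), (0.5, 20, 0), (1.0, 40, 0)]
square = [(1.0, 0, 0), (1.5, 0, 30)]

report = {
    "moves_right(circle)": moves_right(circle),
    "starts_after(square, circle)": starts_after(square, circle),
}
print(report)   # failed predicates would be fed back to the LLM for correction
```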