| 2026
Accelerating Social Science Research via Agentic Hypothesization and Experimentation
Jishu Sen Gupta, Harini SI, Somesh Kumar Singh, Syed Mohamad Tawseeq, Yaman Kumar Singla, David Doermann, Rajiv Ratn Shah, Balaji Krishnamurthy
Data-driven social science research is inherently slow, relying on iterative cycles of observation, hypothesis generation, and experimental validation. While recent data-driven methods promise to accelerate parts of this process, they largely fail to support end-to-end scientific discovery. To address this gap, we introduce EXPERIGEN, an agentic framework that operationalizes end-to-end discovery through a Bayesian-optimization-inspired two-phase search, in which a Generator proposes candidate hypotheses and an Experimenter evaluates them empirically. Across multiple domains, EXPERIGEN consistently discovers 2-4x more statistically significant hypotheses that are 7-17 percent more predictive than those of prior approaches, and naturally extends to complex data regimes, including multimodal and relational datasets. Beyond statistical performance, hypotheses must be novel, empirically grounded, and actionable to drive real scientific progress. To evaluate these qualities, we conduct an expert review of machine-generated hypotheses, collecting feedback from senior faculty. Among 25 reviewed hypotheses, 88 percent were rated moderately or strongly novel, 70 percent were deemed impactful and worth pursuing, and most demonstrated rigor comparable to senior graduate-level research. Finally, recognizing that ultimate validation requires real-world evidence, we conduct the first A/B test of LLM-generated hypotheses, observing statistically significant results with p < 1e-6 and a large effect size of 344 percent.
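The Generator–Experimenter loop described above can be sketched roughly as follows. Everything here (hypotheses as candidate feature indices, an absolute-correlation "experiment", the toy data) is a hypothetical stand-in for illustration, not EXPERIGEN's actual implementation:

```python
import random
import statistics

random.seed(0)

def generate_hypotheses():
    # Hypothetical Generator: each "hypothesis" is a candidate feature index
    # claimed to predict the outcome. For simplicity, propose all features.
    return list(range(10))

def experiment(feature, data):
    # Hypothetical Experimenter: score a hypothesis by the absolute Pearson
    # correlation between the candidate feature and the observed outcome.
    xs = [row[feature] for row in data["X"]]
    ys = data["y"]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

# Toy dataset: feature 3 drives the outcome, the other nine are noise.
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]
y = [row[3] * 2 + random.gauss(0, 0.1) for row in X]
data = {"X": X, "y": y}

# Two-phase search: propose, evaluate, keep the best-supported hypothesis.
candidates = generate_hypotheses()
scores = {f: experiment(f, data) for f in candidates}
best = max(scores, key=scores.get)
```

In the real framework the proposal step is model-driven and the evaluation step applies proper significance testing; the sketch only shows the propose-then-evaluate structure.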
Anti-quorum, anti-biofilm activity of FDA-approved drugs against P. aeruginosa: in silico and in vitro studies
Anmol Srivastava, Vishnu Agarwal, Vivek Kumar
P. aeruginosa is an opportunistic pathogen that causes various nosocomial infections. The ability of P. aeruginosa to form biofilms is one of the main factors contributing to its pathogenicity. Due to biofilm formation, bacteria become embedded in the biofilm and are able to withstand extreme environmental conditions such as chemicals, UV, temperature, pH, salinity, and antibiotics. Biofilm formation is an important virulence factor associated with quorum sensing (QS), which is a cell-to-cell communication system that is influenced by cell density.
Burkholderia pseudomallei quorum sensing molecule 3-hydroxy-C10 HSL triggers organelle stress and inflammatory responses in A549 cell line
Anmol Srivastava, Nidhi Verma, Vishnu Agarwal
Burkholderia pseudomallei, the causative agent of melioidosis, is a recognised bioterrorism threat. This microorganism produces a key quorum-sensing molecule, 3-hydroxy-C10 homoserine lactone (3-OH-C10 HSL), which has been shown to modulate host immune responses. This study investigated the impact of 3-OH-C10 HSL on the A549 cell line, with a focus on organelle stress and inflammatory responses. Treatment with 3-OH-C10 HSL (100 μM, 2 h) induced a significant elevation of cytosolic calcium and endoplasmic reticulum (ER) stress, evidenced by BiP upregulation and activation of the PERK-CHOP axis, indicating activation of the unfolded protein response (UPR). Mitochondrial function was compromised, as shown by reduced ATP production, loss of mitochondrial membrane potential (MMP), and elevated mitochondrial ROS generation. Furthermore, lysosomal dysfunction was observed through decreased.
In‐Silico Characterisation of Burkholderia pseudomallei K96243 Pathogenic Islands: Unveiling Novel Targets for Therapeutic Development
Anmol Srivastava, Nidhi Verma, Vishnu Agarwal, Shubham Sharma
Burkholderia pseudomallei is a deadly bacterium responsible for melioidosis, which is challenging to treat because of its antibiotic resistance and ability to evade the immune response. This study focuses on in silico analysis to identify novel drug targets. In the B. pseudomallei K96243 strain, we identified seven pathogenicity islands comprising 138 genes. Subsequent filtering based on criteria such as available data, essentiality, lack of human homology, and uniqueness narrowed this to 24 promising targets, with eight top candidates. These include proteins involved in energy production (ctaB, BPSL1454, BPSS0086, petB, BPSL1260, and BPSL1259), immune evasion (BPSL1655 and BPSS1780), and the porin efflux pump. Computational interaction analysis revealed a connection between these targets and human immune and respiratory pathways. Prominently, all eight potential drug target candidates showed no homology to human proteins, highlighting their promising role in drug development against melioidosis. This work provides an important framework for identifying novel therapeutics and vaccines against B. pseudomallei.
LittiChoQA: Literary Texts in Indic Languages Chosen for Question Answering
Aarya Khandelwal, Ritwik Mishra, Rajiv Ratn Shah
Long-context question answering (QA) over literary texts poses significant challenges for modern large language models, particularly in low-resource languages. We address the scarcity of long-context QA resources for Indic languages by introducing LittiChoQA, the largest literary QA dataset to date covering many languages spoken in the Gangetic plains of India. The dataset comprises over 270K automatically generated question-answer pairs with a balanced distribution of factoid and non-factoid questions, generated from naturally authored literary texts collected from the open web. We evaluate multiple multilingual LLMs on non-factoid, abstractive QA, under both full-context and context-shortened settings. Results demonstrate a clear trade-off between performance and efficiency: full-context fine-tuning yields the highest token-level and semantic-level scores, while context shortening substantially improves throughput. Among the evaluated models, Krutrim-2 achieves the strongest performance, obtaining a semantic score of 76.1 with full context; in shortened-context settings, it scores 74.9 with answer-paragraph selection and 71.4 with vector-based retrieval. Qualitative evaluations further corroborate these findings.
| 2025
A visuo-haptic extended reality–based training system for hands-on manual metal arc welding training
Kalpana Shankhwar, Tung-Jui Chuang, Yao-Yang Tsai, Shana Smith
Welding training has been an important job training process in industry and usually demands a large amount of resources. In real practice, the strong magnetic force and intense heat during welding processes often frighten novice welders. In order to provide safe and effective welding training, this study developed a visuo-haptic extended reality (VHXR)–based hands-on welding training system for training novice welders to perform a real welding task. Novice welders could use the VHXR-based system to perform a hands-on manual arc welding task without exposure to high temperature and intense ultraviolet radiation. Real-time, realistic force and visual feedback are provided to help trainees maintain a constant arc length, travel speed, and electrode angle. Compared to traditional video training, users trained using the VHXR-based welding training system demonstrated significantly better performance in real welding tasks. Trainees were able to produce better-quality joints by performing smoother welding with fewer mistakes, inquiry times, and hints.
A Novel Method Based on Hybridization of Generative Adversarial Imputation Nets and SDAE-Kriging for RUL Prediction of Lithium-Ion Battery in Scenarios of Missing and …
Wei Li, Yongsheng Li, Ningbo Wang, Akhil Garg, Liang Gao, Bibaswan Bose, Kalpana Shankhwar
Lithium-ion batteries (LIBs) have received enormous attention as the core components of Electric vehicles (EVs). An unavoidable issue is that battery performance will continue to degrade as materials age and cycle time increases. Accurately predicting the Remaining useful life (RUL) of LIBs is an important prerequisite to ensure the safe driving of EVs. However, the actual battery management system may encounter sensor or communication system failures, resulting in missing or incomplete data, which will result in inaccurate battery RUL predictions. This article presents a novel method based on hybridization of Generative adversarial imputation nets (GAIN) and Stacked denoised autoencoder with Kriging (SDAE-Kriging) for the prediction of RUL of LIBs in scenarios of missing and incomplete data. In the proposed method, the GAIN is leveraged to realize the filling of missing and incomplete data. The SDAE
ARoma: Augmented Reality Olfactory Menu Application
Aarav Balachandran, Kritika Gupta, Prajna Vohra, Anmol Srivastava
This study aims to explore the integration of Augmented Reality (AR) and olfactory technology to enhance dining experience in restaurants. We present ARoma, an innovative AR olfactory menu application for Indian cuisine which provides users with 3D visualisation of dishes, detailed ingredient and nutritional information, and historical context, as well as an olfaction device to deliver the aroma of the dishes. Our research compares the traditional menu experience with the AR menu and ARoma, aiming to understand how these technologies affect customers’ perceptions of food quality, dining enjoyment, and immersion. Our user study involved a sample size of 30 participants, divided into two groups. Group A compared traditional menu experiences with AR menus, while Group B experienced traditional menus followed by ARoma. Using this control group study and mixed-method approach, including quantitative surveys and qualitative interviews, we found that AR menus significantly enhance the dining experience by providing detailed and engaging information. Our findings suggest that AR and olfactory technology can significantly improve customer satisfaction and engagement in the food industry.
An interactive extended reality-based tutorial system for fundamental manual metal arc welding training
Kalpana Shankhwar, Shana Smith
Extended reality (XR) technology has been proven an effective human–computer interaction tool to increase the perception of presence. The purpose of this study is to develop an interactive XR-based welding tutorial system to enhance the learning and hands-on skills of novice welders. This study is comprised of two parts: (1) fundamental manual metal arc welding (MMAW) science and technology tutoring in a virtual reality (VR)-based environment, and (2) hands-on welding training in a mixed reality (MR)-based environment. Using the developed tutorial system, complicated welding process and the effects of welding process parameters on weld bead geometry can be clearly observed and comprehended by using a 3D interactive user interface. Visual aids and quantitative guidance are displayed in real time to guide novice welders through the correct welding procedure and help them to maintain a proper welding position. A user study was conducted to evaluate the learnability, workload, and usability of the system. Results show that users obtained significantly better performance by using the XR-based welding tutorial system, compared to those who were trained using the conventional classroom training method.
Boosting Automated Urine Sediment Classification via Data Scaling
Satyendra Yadav, Isha Saini, Vidushi Sharma, Rajiv Ratn Shah
Urine sediment analysis plays a critical role in diagnosing kidney-related diseases, urinary tract infections, and many related disorders. Historically, urine sediment microscopic image samples were manually examined under a microscope by clinical experts and trained professionals. This traditional manual analysis is slow, labor-intensive, and prone to variability and errors. In recent times, computer vision models have been extensively used in automated urine sediment analysis, but they often struggle to accurately locate and identify smaller and irregularly shaped urine sediments. To overcome these challenges, we have worked with two distinct approaches to improve the performance of computer vision models in urine sediment analysis. In this work, we have used two state-of-the-art computer vision models, YOLOv12 and RT-DETR, in our experimental results on the publicly
Information Extraction for the SAPPhIRE Model of Causality Using Dependency Parsing, Lexical Database, and Rules
Sonal Keshwani, Amaresh Chakrabarti, Kausik Bhattacharya, V Srinivasan
The representation of design information through ontologies has proven to be effective in fostering creative ideation within product design. Consequently, researchers have developed databases comprising models of engineering and biological systems by leveraging ontologies. However, the manual construction of a large number of models from technical documents is an effort-intensive task that demands specialized expertise. To address this challenge, researchers have investigated automatic information extraction methods utilizing data-intensive machine-learning models. However, previous research has not fully documented the end-to-end process of information extraction and representation and has not reported the end-to-end accuracy. This study introduces a novel method for automatic information extraction pertinent to the State change-Action-Part-Phenomenon-Input-oRgan-Effect (SAPPhIRE) model of causality alternative to creating data-intensive machine-learning models. This method employs the dependency parsing technique of natural language processing, along with rules supported by a lexical database, to extract words relevant to the SAPPhIRE model. Unlike previous approaches that rely on supervised learning methods, this new technique does not require extensive datasets for the training and validation of machine-learning models. Furthermore, it reports the end-to-end accuracy of information extraction, rather than focusing solely on the word classification task, which is preceded by manual pre-processing in prior research. The results of this newly developed method have been validated against SAPPhIRE models reported in the literature and through input provided by SAPPhIRE specialists and design researchers.
| 2024
Effecti-Net: A Multimodal Framework and Database for Educational Content Effectiveness Analysis
Jainendra Shukla, Deep Dwivedi, Ritik Garg, Shiva Baghel, Rushil Thareja, Ritvik Kulshrestha, Mukesh Mohania
Amid the evolving landscape of education, evaluating the impact of educational video content on students remains a challenge. Existing methods for assessment often rely on heuristics and self-reporting, leaving room for subjectivity and limited insight. This study addresses this issue by leveraging physiological sensor data to predict student-perceived content effectiveness. Within the realm of educational content evaluation, prior studies focused on conventional approaches, leaving a gap in understanding the nuanced responses of students to educational materials. To bridge this gap, our research introduces a novel perspective, building upon previous work in multimodal physiological data analysis. Our primary contributions encompass two key elements. First, we present the 'Effecti-Net' architecture, a sophisticated deep learning model that integrates data from multiple sensor modalities, including Electroencephalogram (EEG), Eye Tracker, Galvanic Skin Response (GSR), and Photoplethysmography (PPG). Second, we introduce the 'DECEP' dataset, a repository comprising 597 minutes of multimodal sensor data. To assess the effectiveness of our approach, we benchmark it against conventional methods. Our model achieves the lowest MSE of 0.1651 and MAE of 0.3544 on the DECEP dataset. It offers educators and content creators a comprehensive framework that promotes the development of more engaging educational content.
InMDb: Indian Movie Database for Emotion Analysis
Jainendra Shukla, Ritik Garg, Rushil Thareja, Manak Bisht, Manavjeet Singh, Sarthak Arora
Cinematic experiences, characterized by intricate audio-visual stimuli, foster profound emotional engagement. However, the correlation between audience emotions, physiological responses, film genres, and ratings, particularly in the underexplored Bollywood context, remains largely uncharted. Understanding this intricate interplay can provide filmmakers valuable insights for content adaptation. Addressing this research gap, we introduce "InMDB: Indian Movie DataBase," a comprehensive multimodal dataset that examines emotional responses elicited by Bollywood trailers, using both self-reported measures and physiological data. Our meticulous statistical analysis of the dataset deepens the understanding of how emotions and their subsequent physiological responses correlate with, and potentially influence, film ratings and categories, offering novel insights into emotional engagement in the cinematic context.
Inclusive Medicine Packaging for the Geriatric Population: Bridging Accessibility Gaps
Mrishika Kannan Nair, Richa Gupta
The geriatric population is the largest and most consistent consumer of medications [6]. Age-related changes impacting visual and tactile acuity pose barriers to effective medication management. The primary reason for this is the neglect of inclusive and accessible design practices in medicine strips. This research uncovers the exclusionary design of medication packaging and emphasises the imperative shift towards a more inclusive design. A mixed-method study was employed to understand the major physical and cognitive challenges faced by the elderly in medication management. Amongst the different design interventions explored, augmented reality QR tags emerged as a versatile solution, offering easy, magnified, text-to-speech content on mobile devices. To validate the proposed prototype and approach, an experiment was conducted. Our design reduced task completion time, minimised the chances of medication errors and reduced the reliance on assistance. The qualitative interview post-experiment revealed enhanced user satisfaction and ease of use. This research has illuminated the possibilities for enhancing healthcare accessibility and medication management through the thoughtful integration of technology into medicine strip design. By offering a more inclusive and user-friendly approach, the study bridges the accessibility gap, empowering individuals of all ages and abilities to manage their medications safely and effectively.
KaavadBits: Exploring Tangible Interactive Storytelling of Branching Narratives through a Kaavad-inspired Installation
Anmol Srivastava, Saumik Shashwat, Aditya Padmagirwar, Shivoy Arora
This work explores branching narratives through KaavadBits, a tabletop art installation embodying the kaavadiya-jajmaan or the narrator-patron perspective of kaavad baanchana, the Indian storytelling tradition of reciting the kaavad. For diegetic worldbuilding of tales from Panchatantra, a compilation of ancient Indian animal fables, the narrator takes the physical form of a tree, with which the audience interacts using tokens for a seamless multi-modal storytelling experience. Building on the related explorations, we propose a novel design that immerses the audience through choice, character and question-based interactions. We discuss the insights from a pilot user study and directions for future work. Through this paper, we aim to strike consequential tangible, technological and narrative explorations into various lesser-known traditional forms of storytelling that may inspire new interaction techniques, ultimately preserving the intangible heritages.
| 2023
A Rapid Scoping Review and Conceptual Analysis of the Educational Metaverse in the Global South: Socio-Technical Perspectives
Anmol Srivastava
This paper presents a conceptual insight into the Design of the Metaverse to facilitate educational transformation in selected developing nations within the Global South regions, e.g., India. These regions are often afflicted with socio-economic challenges but rich in cultural diversity. By utilizing a socio-technical design approach, this study explores the specific needs and opportunities presented by these diverse settings. A rapid scoping review of the scant existing literature is conducted to provide fundamental insights. A novel design methodology was formulated that utilized ChatGPT for ideation, brainstorming, and literature survey query generation. This paper aims not only to shed light on the educational possibilities enabled by the Metaverse but also to highlight design considerations unique to the Global South.
A Video Is Worth 4096 Tokens: Verbalize Videos To Understand Them In Zero Shot
Rajiv R Shah, Aanisha Bhattacharya, Yaman K Singla, Balaji Krishnamurthy, Changyou Chen
Multimedia content, such as advertisements and story videos, exhibit a rich blend of creativity and multiple modalities. They incorporate elements like text, visuals, audio, and storytelling techniques, employing devices like emotions, symbolism, and slogans to convey meaning. There is a dearth of large annotated training datasets in the multimedia domain hindering the development of supervised learning models with satisfactory performance for real-world applications. On the other hand, the rise of large language models (LLMs) has witnessed remarkable zero-shot performance in various natural language processing (NLP) tasks, such as emotion classification, question-answering, and topic classification. To leverage such advanced techniques to bridge this performance gap in multimedia understanding, we propose verbalizing long videos to generate their descriptions in natural language, followed by performing video-understanding tasks on the generated story as opposed to the original video. Through extensive experiments on fifteen video-understanding tasks, we demonstrate that our method, despite being zero-shot, achieves significantly better results than supervised baselines for video understanding. Furthermore, to alleviate a lack of story understanding benchmarks, we publicly release the first dataset on a crucial task in computational social science on persuasion strategy identification.
An Analysis of Physiological and Psychological Responses in Virtual Reality and Flat Screen Gaming
Jainendra Shukla, Ritik Vatsal, Shrivatsa Mishra, Rushil Thareja, Mrinmoy Chakrabarty, Ojaswa Sharma
Recent research has focused on the effectiveness of Virtual Reality (VR) in games as a more immersive method of interaction. However, there is a lack of robust analysis of the physiological effects between VR and flatscreen (FS) gaming. This paper introduces the first systematic comparison and analysis of emotional and physiological responses to commercially available games in VR and FS environments. To elicit these responses, we first selected four games through a pilot study of 6 participants to cover all four quadrants of the valence-arousal space. Using these games, we recorded the physiological activity, including Blood Volume Pulse and Electrodermal Activity, and self-reported emotions of 33 participants in a user study. Our data analysis revealed that VR gaming elicited more pronounced emotions, higher arousal, increased cognitive load and stress, and lower dominance than FS gaming. The Virtual Reality and Flat Screen (VRFS) dataset, containing over 15 hours of multimodal data comparing FS and VR gaming across different games, is also made publicly available for research purposes. Our analysis provides valuable insights for further investigations into the physiological and emotional effects of VR and FS gaming.
An EEG-Based Computational Model for Decoding Emotional Intelligence,Personality, and Emotions
Jainendra Shukla, K. Kannadasan, Sridevi Veerasingam, B. Shameedha Begum, N. Ramasubramanian
Emotional intelligence (EI), a critical aspect of regulating emotions and behavior in daily life, holds paramount significance in both psychology research and real-world applications. Understanding and assessing EI are essential for informed decision-making, nurturing relationships, and facilitating efficient communication. As human–computer interaction (HCI) continues to evolve, there is a growing need to develop systems capable of comprehending human emotions, personality traits, and moods through recognition models. This research endeavors to explore the potential of recognizing EI in the context of effective HCI. To address this challenge, we have developed a novel computational model based on electroencephalogram (EEG) data. Our work encompasses a carefully curated EEG dataset, featuring recordings from 40 participants who were exposed to a set of 16 emotional video clips selected from distinct quadrants of the valence-arousal (VA) space. Participants’ emotional responses were meticulously annotated through self-assessment of emotional dimensions for each video stimulus. In addition, participants’ feedback on the big-five personality traits and their responses to the trait emotional intelligence questionnaire (TEIQue) served as our ground truth for further analysis. Our study includes a comprehensive correlation analysis, using Pearson correlations to establish the relationships between personality traits and EI. Furthermore, we conducted EEG-based analysis to uncover connections between EEG signals and emotional attributes. Remarkably, our analysis reveals that EEG signals excel at capturing differences in EI levels. Leveraging machine learning algorithms, we have constructed binary classification models that yield average F1 scores of 0.72, 0.71, and 0.62 for emotions, personality traits, and EI, respectively. These experimental outcomes underscore the potential of EEG signals in the recognition of EI, personality traits, and emotions.
We envision our proposed model as a foundational element in the development of effective HCI systems, enabling a deeper and better understanding of human behavior.
AttentioNet: Monitoring Student Attention Type in Learning with EEG-Based Measurement System
Jainendra Shukla, Dhruv Verma, Sejal Bhalla, S. V. Sai Santosh, Saumya Yadav, Aman Parnami
Student attention is an indispensable input for uncovering their goals, intentions, and interests, which prove to be invaluable for a multitude of research areas, ranging from psychology to interactive systems. However, most existing methods to classify attention fail to model its complex nature. To bridge this gap, we propose AttentioNet, a novel Convolutional Neural Network-based approach that utilizes Electroencephalography (EEG) data to classify attention into five states: Selective, Sustained, Divided, Alternating, and Relaxed. We collected a dataset of 20 subjects through standard neuropsychological tasks to elicit different attentional states. The average across-student accuracy of our proposed model at this configuration is 92.3% (SD=3.04), which is well-suited for end-user applications. Our transfer learning-based approach for personalizing the model to individual subjects effectively addresses the issue of individual variability in EEG signals, resulting in improved performance and adaptability of the model for real-world applications. This represents a significant advancement in the field of EEG-based classification. Experimental results demonstrate that AttentioNet outperforms a popular EEGnet baseline (p-value < 0.05) in both subject-independent and subject-dependent settings, confirming the effectiveness of our proposed approach despite the limitations of our dataset. These results highlight the promising potential of AttentioNet for attention classification using EEG data.
Emotionally Enhanced Talking Face Generation
Rajiv R Shah, Sahil Goyal, Sarthak Bhagat, Shagun Uppal, Hitkul Jangra, Yi Yu, Yifang Yin
Several works have developed end-to-end pipelines for generating lip-synced talking faces with real-world applications, such as teaching and language translation in videos. However, these prior works fail to create realistic-looking videos since they focus little on people's expressions and emotions. Moreover, these methods' effectiveness largely depends on the faces in the training dataset, which means they may not perform well on unseen faces. To mitigate this, we build a talking face generation framework conditioned on a categorical emotion to generate videos with appropriate expressions, making them more realistic and convincing. With a broad range of six emotions, i.e., happiness, sadness, fear, anger, disgust, and neutral, we show that our model can adapt to arbitrary identities, emotions, and languages. Our proposed framework has a user-friendly web interface with a real-time experience for talking face generation with emotions. We also conduct a user study for subjective evaluation of our interface's usability, design, and functionality.
EngageMe: Assessing Student Engagement in Online Learning Environment Using Neuropsychological Tests
Jainendra Shukla, Saumya Yadav, Momin Naushad Siddiqui
In the proposed research, we investigated whether the standardized neuropsychological tests commonly used to assess attention can be used to measure students’ engagement in online learning settings. Accordingly, we employed 73 students in three clinically relevant neuropsychological tests to assess three types of attention. Students’ engagement performance, as evidenced by their facial video, was also annotated by three independent annotators. The manual annotations observed a high level of inter-annotator reliability (Krippendorff's Alpha of 0.864). Further, by obtaining a correlation value of 0.673 (Spearman's rank correlation) between manual annotation and neuropsychological test scores, our results show construct validity to prove neuropsychological test scores’ significance as a latent variable for measuring students’ engagement. Finally, using non-intrusive behavioral cues, including facial action unit and eye gaze data collected via webcam, we propose a machine learning method for engagement analysis in online learning settings, achieving a low mean squared error value (0.022). The findings suggest a neuropsychological test-based machine learning technique could effectively assess students’ engagement in online education.
Finite element analysis results visualization of manual metal arc welding using an interactive mixed reality-based user interface
Kalpana Shankhwar, Shana Smith
Welding is extensively used in manufacturing industries for various applications. However, residual stress is induced due to the non-uniform temperature distribution on the weld plates during the welding process, which significantly affects the fatigue strength. In addition, the non-uniform expansion and contraction of the weld and surrounding base metal cause structural distortion. The distortion affects final product quality and results in lower productivity. Therefore, the structural analysis of the welded component is significantly important. In this work, a mixed reality (MR)-based user interface was developed to overlay the finite element analysis (FEA) results on the real weld plates in real time for manual metal arc welding (MMAW). Since the numerical simulation using FEA requires a large number of computational resources, a gradient boosted regression tree (GBRT) model was trained to predict the residual stress and deformation results. Furthermore, a lookup table and a trilinear interpolation method were used to render the results based on users' input data using Microsoft HoloLens 2 in real time. The developed interactive MR-based user interface can help welders quickly predict and control the residual stress and welding distortion before the real welding process and help novices learn the relationship between the welding parameters and the induced residual stress and deformation.
Hindi Chatbot for Supporting Maternal and Child Health Related Queries in Rural India
Rajiv R Shah, Ritwik Mishra, Simranjeet Singh, Jasmeet Kaur, Pushpendra Singh
In developing countries like India, doctors and healthcare professionals working in public health spend significant time answering health queries that are fact-based and repetitive. Therefore, we propose an automated way to answer maternal and child health-related queries. A database of Frequently Asked Questions (FAQs) and their corresponding answers generated by experts is curated from rural health workers and young mothers. We develop a Hindi chatbot that identifies k relevant Question and Answer (QnA) pairs from the database in response to a healthcare query (q) written in Devanagari script or Hindi-English (Hinglish) code-mixed script. The curated database covers 80% of all the queries that a user of our study is likely to ask. We experimented with (i) rule-based methods, (ii) sentence embeddings, and (iii) a paraphrasing classifier, to calculate the q-Q similarity. We observed that the paraphrasing classifier gives the best results when trained first on open-domain text and then on the healthcare domain. Our chatbot uses an ensemble of all three approaches. We observed that if a given q can be answered using the database, then our chatbot can provide at least one relevant QnA pair among its top three suggestions for up to 70% of the queries.
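The q-Q retrieval step above can be sketched with a toy similarity scorer. The bag-of-words embedding here is a deliberately simple stand-in for the paper's actual ensemble of rules, sentence embeddings, and a paraphrase classifier, and the FAQ entries are invented examples:

```python
from collections import Counter
import math

def embed(text):
    # Hypothetical stand-in for a sentence embedding: a word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, faq, k=3):
    # Rank stored FAQ (Q, A) pairs by similarity to the incoming query q.
    ranked = sorted(faq, key=lambda qa: cosine(embed(query), embed(qa[0])),
                    reverse=True)
    return ranked[:k]

faq = [
    ("what should a pregnant mother eat", "A balanced diet with iron-rich foods."),
    ("when to vaccinate a newborn", "Follow the immunization schedule."),
    ("how often to feed an infant", "Feed on demand, roughly every few hours."),
]
hits = top_k("what food should a pregnant mother eat", faq, k=1)
```

A production system would replace `embed` with a multilingual sentence encoder and add the rule-based and paraphrase-classifier scores before ensembling.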
| 2022
A review of heat source and resulting temperature distribution in arc welding
Kalpana Shankhwar, Ankit Das, Arvind Kumar, Nenad Gubeljak
Thermal analysis is one of the cardinal studies essential for arc welding processes. Thermal field and temperature distribution in arc welds affect the quality of welds as they govern the microstructural and thermo-mechanical properties. Therefore, a thorough understanding of the thermal behaviour in arc welds is an absolute necessity. Significant efforts have been made in the past to determine the temperature field associated with arc welding. However, for accurate determination of the temperature field/distribution, it is necessary to understand the heat source which influences the temperature distribution in welds. Rosenthal reported the first concept of modelling the heat source, which was then improved upon, and new models have been instituted through the years. This review article summarizes a collective study made on the heat source and the resulting temperature distribution in arc welds. Numerous methods have been developed to conduct transient temperature distribution studies on arc welds. Analytical approaches with constant material properties, numerical approaches with variable material properties, infrared imaging systems, machine vision systems with soft computing, etc. have been developed to facilitate understanding of transient temperature in arc welds. We first summarize heat source studies, followed by the literature on various techniques and methods devoted to transient temperature investigations. Finally, the latest methods used for thermal studies, such as image processing, machine learning and intelligent systems, are summarized and discussed.
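For reference, Rosenthal's classical thick-plate (quasi-stationary moving point source) solution mentioned above is commonly written as:

```latex
T(x, y, z) - T_0 = \frac{Q}{2\pi k R}\,
  \exp\!\left(-\frac{v\,(\xi + R)}{2\alpha}\right),
\qquad \xi = x - vt, \quad R = \sqrt{\xi^2 + y^2 + z^2}
```

where T_0 is the initial temperature, Q the net heat input, k the thermal conductivity, α the thermal diffusivity, v the travel speed, and ξ the coordinate moving with the source. Conventions for Q (net arc power versus power per unit efficiency) vary across texts, which is part of what the later heat-source models refine.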

