We conducted a systematic review using PubMed and Google Scholar. Relevant publications published on or before December 2021 were identified and evaluated for inclusion. The objective was to systematically review the use of computer vision for facial behavior analysis in schizophrenia studies, summarizing the clinical findings and the corresponding data processing and machine learning methods.
Funding: Research reported in this publication was supported in part by Imagine, Innovate and Impact (I3) Funds from the Emory School of Medicine and through the Georgia CTSA NIH award (UL1-TR002378). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Although facial expressions can be identified with the help of trained experts [51, 53], manual identification fails to scale due to time and financial constraints and is not feasible in a busy outpatient clinic. Furthermore, because the field lacks easily reproducible standards for facial expressions, it has yet to develop an objective consensus definition of what precisely constitutes affective flattening or other facial abnormalities in schizophrenia. Automated computer vision techniques may help address some of these challenges, as advances in affective computing have made it easier and cheaper to analyze large amounts of data while quantifying facial behaviors consistently. With improving technology harnessing advances in temporal and spatial granularity, computer vision-based analysis has the potential to help researchers better understand the phenomenology of schizophrenia, differentiate those with schizophrenia from those without it, and subtype schizophrenia based on digital phenotypes. Additionally, computer vision can objectively introduce non-verbal facial behavior data into the clinical setting, allowing clinicians to better identify negative symptoms and monitor treatment response to medication and psychosocial treatments. This systematic review serves as a road map for researchers to understand the current approaches, technical parameters, and existing challenges of using computer vision to analyze facial movements in patients with schizophrenia.
Another barrier to applying computer vision in schizophrenia research is the lack of an open-source, state-of-the-art computer vision toolbox specifically designed for psychiatric facial behavior analysis. The most widely used at present is OpenFace 2.0, released in 2016. Although it covers a wide range of analyses, such as head tracking, facial AU recognition, and gaze tracking, its methods perform significantly worse than the latest deep learning-based methods (such as JAA-Net). Furthermore, since OpenFace is not specifically designed for psychiatric studies, it focuses only on frame-level behavior recognition without implementing any video-level analysis. Lastly, its interface can be difficult for researchers without prior programming experience. Therefore, a next-generation open-source toolbox that tackles these issues might help accelerate the use of computer vision in schizophrenia research.
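To make the frame-level versus video-level distinction concrete, the following minimal sketch aggregates per-frame AU intensities into simple per-video summary features (mean and variance), the kind of video-level step a frame-level tool leaves to the researcher. The column names (e.g. `AU01_r`) are assumptions modeled on OpenFace-style CSV output, not a documented API, and the feature set is purely illustrative.

```python
import csv
import statistics

def video_level_features(csv_path, au_columns=("AU01_r", "AU06_r", "AU12_r")):
    """Aggregate per-frame AU intensities into per-video mean/variance features.

    Assumes a CSV with one row per video frame and one intensity column per
    action unit, in the style of OpenFace output (a hypothetical layout here).
    """
    values = {au: [] for au in au_columns}
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            for au in au_columns:
                # Some tools emit headers with a leading space; accept both.
                key = au if au in row else " " + au
                if key in row and row[key] != "":
                    values[au].append(float(row[key]))
    features = {}
    for au, series in values.items():
        if series:
            features[f"{au}_mean"] = statistics.fmean(series)
            features[f"{au}_var"] = statistics.pvariance(series)
    return features
```

Video-level vectors like these can then feed a downstream classifier or be correlated with symptom ratings; richer temporal features (e.g. AU event counts or durations) would follow the same aggregation pattern.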