Fussy, Hungry, or Even in Pain? Scientists Create an AI Tool to Tell Babies' Cries Apart

Every parent knows the frustration of responding to a baby's cries, wondering whether the baby is hungry, wet, tired, in need of a hug, or perhaps even in pain. A group of researchers in the USA has devised a new artificial-intelligence method that can identify and distinguish between normal cry signals and abnormal ones, such as those caused by an underlying illness. The method, based on a cry-language recognition algorithm, promises to be useful to parents at home as well as in healthcare settings, where doctors could use it to interpret the cries of sick children.

The research was published in the May issue of IEEE/CAA Journal of Automatica Sinica (JAS), a joint publication of the IEEE and the Chinese Association of Automation.

Experienced healthcare workers and seasoned parents can distinguish among a baby's many needs fairly accurately based on its cries. While each baby's cry is unique, cries arising from the same cause share common features. Identifying the hidden patterns in the cry signal has long been a major challenge, and artificial intelligence has now been shown to be an appropriate tool for the task.

The new research uses an algorithm based on automatic speech recognition to detect and recognize the features of infant cries. To analyze and classify those signals, the team used compressed sensing as a way to process large amounts of data more efficiently. Compressed sensing reconstructs a signal from sparse measurements and is especially useful when sounds are recorded in noisy environments, as baby cries typically are. In this study, the researchers designed a new cry-language recognition algorithm that can distinguish the meanings of both normal and abnormal cry signals in a noisy environment. The algorithm is independent of the individual crier, meaning it can be applied broadly in practical scenarios to recognize and classify various cry features and to better understand why babies are crying and how urgent the cries are.
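The core idea of compressed sensing can be illustrated with a small numerical sketch. The example below is not the paper's pipeline; it is a generic demonstration, assuming a random Gaussian sensing matrix and an exactly sparse signal, of recovering a signal from far fewer measurements than samples using Orthogonal Matching Pursuit (OMP), a standard greedy recovery algorithm.

```python
# Minimal compressed-sensing sketch (illustrative, not the paper's method):
# recover a k-sparse signal x of length n from only m << n random
# measurements y = A @ x, via Orthogonal Matching Pursuit.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5                        # signal length, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)             # the hidden sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x                                   # compressed measurements


def omp(A, y, k):
    """Greedy sparse recovery: repeatedly pick the column most correlated
    with the residual, then least-squares fit on the chosen support."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat


x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x))
```

With a noiseless, exactly sparse signal and enough measurements, the recovery error is essentially zero; real cry recordings are noisy, which is precisely the setting where such sparse-recovery techniques are argued to help.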

"Like a special language, cry sounds carry a lot of health-related information. The differences between sound signals actually carry the information, and these differences are represented by different features of the cry signals. To recognize and leverage the information, we have to extract the features and then obtain the information from them," says Lichuan Liu, corresponding author, Associate Professor of Electrical Engineering, and Director of the Digital Signal Processing Laboratory, whose group conducted the research.
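The "extract features, then obtain the information" step can be sketched in a few lines. The features below (short-time energy and spectral centroid) are common audio descriptors chosen for illustration; the paper's actual feature set and classifier may differ, and the 450 Hz tone is a synthetic stand-in for a cry recording.

```python
# Hedged sketch of frame-level feature extraction from an audio signal
# (illustrative features, not necessarily those used in the study).
import numpy as np

fs = 8000                                   # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 450 * t) * np.hanning(len(t))  # toy "cry" tone

frame_len, hop = 256, 128                   # ~32 ms frames, 50% overlap
frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]

# Feature 1: short-time energy per frame.
energy = (frames ** 2).sum(axis=1)

# Feature 2: spectral centroid per frame (magnitude-weighted mean frequency).
spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
freqs = np.fft.rfftfreq(frame_len, 1 / fs)
centroid = (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-12)

# One feature row per frame; a classifier would consume this matrix.
features = np.column_stack([energy, centroid])
print(features.shape)
```

For the synthetic 450 Hz tone, the spectral centroid of the middle frames sits near 450 Hz, illustrating how frame-level features encode the acoustic differences the quote describes.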

The researchers hope that their findings can be applied to other medical-care settings in which decision making relies heavily on experience. "The ultimate goals are healthier babies and less pressure on parents and caregivers," says Liu. "We are looking into collaborations with hospitals and medical research centers to obtain more data and input on requirement scenarios, and hopefully we could have some products for clinical practice," she adds.

Lichuan Liu, Wei Li, Xianwen Wu, Benjamin X. Zhou.
Infant Cry Language Analysis and Recognition: An Experimental Approach.
IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 3, pp. 778-788, May 2019. doi: 10.1109/JAS.2019.1911435
