Social and Mental Therapy with Artificial Animals
A proposal to construct interactive synthetic animals—huggable, bipedal robots containing intuitive software for emotional companionship.
- Huggable, animal-inspired companions
- Auditory, gestural, and analogue interaction
- Emotive cues designed for trust and ease
- Self-programming behavior without cloud dependency
Description
The proposed robotic platform uses auditory, gestural, and analogue interaction to create an “old school” environment. Each robot takes the form of a plush animal (fox, bear, rabbit, etc.) that becomes a companion to its adopter. The exterior is lifelike, and the robot’s emotive cues are designed to meet human expectations of emotional behavior.
Interactive synthetic animals designed for emotional companionship and social support.
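The multi-channel interaction described above can be pictured as a small on-device dispatch loop that maps sensor events to gentle emotive cues. The sketch below is illustrative only; the channel names, event values, and responses (`"purr"`, `"lean_in"`, etc.) are assumptions, not part of the proposal.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Event:
    """A single interaction event from an on-board sensor."""
    channel: str   # "auditory", "gestural", or "analogue" (e.g. touch pressure)
    value: str     # symbolic reading, e.g. "hug" or "voice_soft"

class CompanionLoop:
    """Routes sensor events to emotive responses, entirely on-device."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Event], str]] = {}

    def on(self, channel: str, handler: Callable[[Event], str]) -> None:
        self._handlers[channel] = handler

    def react(self, event: Event) -> str:
        handler = self._handlers.get(event.channel)
        if handler is None:
            return "idle"   # unknown channel: stay calm, do nothing
        return handler(event)

# Hypothetical wiring: each channel maps to a predictable, trust-building cue.
loop = CompanionLoop()
loop.on("auditory", lambda e: "purr" if e.value == "voice_soft" else "ear_tilt")
loop.on("gestural", lambda e: "lean_in" if e.value == "hug" else "blink")

print(loop.react(Event("gestural", "hug")))   # lean_in
```

Keeping the dispatch table local and readable fits the proposal’s no-cloud, modifiable-software stance: every mapping from stimulus to response is inspectable on the device itself.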
Economic Rationale / Business Model
During times of crisis, people often ignore professional advice and engage in risky behavior. SMTWAA (Social and Mental Therapy with Artificial Animals) proposes plush robotic companions that provide solace and reduce harmful behavior through consistent, approachable emotional support.
Human Need
Companionship during stress and in daily life.
Affordability
Designed to be well‑priced and broadly accessible.
Trust
No external connectivity to promote privacy and comfort.
Adaptation
Self‑programming behaviors to fit individual users.
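One minimal reading of “self-programming behaviors” is on-device preference learning: the robot keeps a running score per response and slowly favors the ones a user rewards (e.g. continued petting). This is a sketch under that assumption; the response names, reward scale, and learning rate are all hypothetical.

```python
class Preferences:
    """Tracks per-response scores and adapts them from user feedback."""

    def __init__(self, responses, learning_rate=0.2):
        self.scores = {r: 0.5 for r in responses}   # neutral prior for each cue
        self.lr = learning_rate

    def update(self, response, reward):
        """Nudge one response's score toward the observed reward (0..1)."""
        old = self.scores[response]
        self.scores[response] = old + self.lr * (reward - old)

    def favorite(self):
        """Return the response the user has rewarded most so far."""
        return max(self.scores, key=self.scores.get)

# Hypothetical usage: the user responds warmly to "lean_in" several times.
prefs = Preferences(["purr", "blink", "lean_in"])
for _ in range(5):
    prefs.update("lean_in", 1.0)
print(prefs.favorite())   # lean_in
```

Because the scores live only in local memory, adaptation happens without any cloud dependency, consistent with the trust and privacy goals above.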
Bottlenecks & Risks
- Public aversion to forming emotional attachments with robots.
- The adoption rate required for a profitable model.
- Potential misidentification of user emotional states.
- Public fatigue with scare-mongering about AI takeover.
- Shortcomings of current AI models and strategies for human companionship.
Additional Information
Success depends on individualized cognitive therapy and removing cultural barriers to non‑human companions. Synthetic animals emphasize familiar emotions while avoiding human‑like appearance, encouraging comfort and attachment. Long‑term adoption will also require transparent ethics, caregiver training, and clear safety expectations so users and families understand boundaries, privacy, and data stewardship. Community pilots, culturally specific narratives, and clinician‑guided onboarding can normalize trust while preserving the non‑human identity that makes synthetic companions feel safe.

The platform should also reflect longer‑life computing values: durable, repairable systems, readable and modifiable software, and freedom from forced upgrades or cloud dependency. A human‑scale architecture that favors resilience, transparency, and graceful degradation supports trust in long‑running companionship deployments.
EU Funding Secured
FETOPEN-01-2018-2019-2020 - Challenging Current Thinking
Continuing Budget Requirements
$10,000,000