Accountability for Automated Harm
Patients, children, drivers, workers, consumers, and families have suffered serious harm when artificial intelligence systems, automated software, or connected technology failed in real-world use. The consequences can include missed diagnoses, unsafe recommendations, privacy exposure, financial loss, psychological harm and abuse, catastrophic injury, or death.
Federal guidance, enforcement actions, privacy concerns, child safety investigations, and documented failures involving automated systems have kept these risks in public view. When an algorithm, device, platform, or technology vendor changes the course of your life or a loved one's, you deserve to know whether a preventable safety failure played a role.
What Is AI and Tech Liability?
AI and tech liability refers to legal claims involving harm caused by artificial intelligence, automated decision systems, defective software, connected devices, digital platforms, or technology products.
These cases can involve physical injuries, medical errors, unsafe recommendations, data misuse, deceptive technology claims, algorithmic discrimination, cybersecurity failures, or wrongful death. The central question is whether a company designed, tested, marketed, monitored, or deployed technology in a way that exposed people to preventable harm.
Artificial intelligence is not exempt from accountability. If a technology company, manufacturer, hospital system, employer, platform, or product developer puts an unsafe system into the world, a legal investigation may examine what the company knew, what it failed to test, and whether safer choices were available.
How AI and Technology Can Cause Harm
AI and automated systems can affect people in high-stakes settings. These systems may influence medical care, vehicle safety, workplace decisions, financial access, consumer products, privacy, and online interactions. Potential sources of harm include:
Defective or Unsafe AI Systems
AI systems can produce inaccurate, biased, incomplete, or unsafe outputs. In some cases, users may rely on those outputs because the product was marketed as accurate, safe, or professionally useful.
Medical AI and Diagnostic Errors
Hospitals, clinicians, and device companies increasingly use AI-enabled tools in health care. The FDA maintains a public list of AI-enabled medical devices authorized for marketing in the United States, reflecting the growing role of AI in clinical settings. Medical AI cases may involve missed diagnoses, delayed treatment, incorrect risk scoring, unsafe triage decisions, or failure to warn clinicians about system limitations.
Autonomous and Semi-Autonomous Technology
Vehicles, industrial systems, drones, robotics, and automated equipment may rely on sensors, software, or machine-learning tools. When these systems fail to detect hazards, respond safely, or alert human operators, the result can be catastrophic.
Deceptive AI Claims
Some companies promote AI tools with claims about accuracy, safety, financial benefit, productivity, or human-like performance. The Federal Trade Commission has taken action against companies accused of using AI claims or AI-powered tools in deceptive or unfair ways that harmed consumers.
Data Privacy and Security Failures
AI products often depend on large amounts of personal data. A legal claim may investigate whether a company collected, stored, shared, trained on, or exposed sensitive information in a way that violated privacy commitments or consumer protection duties. The FTC has warned that companies cannot collect user data under one set of privacy promises and then quietly change those commitments after the fact.
Algorithmic Discrimination
Automated systems can affect employment, housing, lending, insurance, education, health care, and access to services. The EEOC has stated that federal employment discrimination laws protect workers when AI systems are used to discriminate based on protected characteristics.
Who May Be Affected by AI and Tech Failures?
AI and technology failures can affect people who never knowingly agreed to rely on an automated system. A patient may not know that AI helped interpret an image. A job applicant may not know that software screened them out. A consumer may not understand that a chatbot, platform, or connected product is collecting sensitive data or providing unsafe guidance.
People who may have claims include:
- Patients harmed by AI-assisted medical decisions
- Families affected by fatal or catastrophic technology failures
- Consumers misled by deceptive AI products or services
- Drivers, passengers, pedestrians, or workers injured by automated systems
- People whose sensitive data was misused, exposed, or used without proper consent
- Workers or applicants harmed by discriminatory automated hiring or workplace tools
- Users harmed by unsafe chatbots, recommendation systems, or digital platforms


AI and Technology Risks Involving Children
Children can face distinct risks from artificial intelligence, automated platforms, connected devices, and digital products. These cases may involve unsafe recommendations, harmful chatbot interactions, addictive platform design, exposure to inappropriate content, privacy violations, data collection from minors, defective educational technology, connected toys, or technology failures that contribute to physical injury.
A legal investigation may examine whether a company designed the product with children in mind, tested foreseeable child use, provided meaningful safeguards, warned parents about known risks, or ignored reports that the technology was harming minors.
When an AI or Tech Liability Claim May Be Investigated
A claim may be investigated when a technology product or automated system contributed to serious harm. The legal theory depends on the facts. Potential claims may involve:
Product Liability
A technology product may be defective because of its design, manufacturing, software architecture, warnings, instructions, testing, or post-market monitoring.
Negligence
A company may have failed to use reasonable care when designing, deploying, supervising, updating, or securing a technology system.
Failure to Warn
A company may have failed to clearly warn users, professionals, or the public about known limitations, foreseeable misuse, dangerous outputs, or safety risks.
Consumer Protection Violations
A company may have marketed an AI product with misleading claims about accuracy, reliability, safety, privacy, or financial benefit.
Medical Malpractice or Health Care Liability
When AI affects patient care, a case may examine the conduct of health care providers, hospitals, device manufacturers, software vendors, or other entities involved in the clinical decision.
Wrongful Death
If a technology failure contributed to a fatal injury, surviving family members may have the right to investigate whether a wrongful death claim exists.

Evidence That May Matter in an AI or Tech Liability Case
AI and tech liability cases often turn on technical evidence that is difficult for injured people to access without legal action.
Important evidence may include:
- Product design files
- Software specifications
- Training and validation data
- Safety testing records
- Internal risk assessments
- Human factors research
- User warnings and instructions
- Marketing claims
- Incident reports
- Complaint histories
- Error logs and audit trails
- Model update histories
- Cybersecurity records
- Contracts between vendors, hospitals, employers, or platforms
- Internal communications about known risks
Companies may blame the user, the professional, or the machine itself. A serious investigation looks deeper. It asks who built the system, who controlled it, who profited from it, who knew about the risk, and who had the power to prevent the harm.
Why AI and Tech Liability Cases Are Complex
AI and technology cases are rarely simple. Multiple companies may be involved in one product or system. A platform may use third-party software. A hospital may rely on a vendor tool. A device may include embedded software, cloud-based updates, sensors, and user interfaces. A product may change over time through patches, model updates, or new training data.
These cases can require investigation into engineering, medicine, cybersecurity, consumer protection, data governance, and regulatory compliance. NIST’s AI Risk Management Framework and related generative AI profile recognize that AI risk management includes issues such as safety, security, bias, transparency, privacy, and accountability.
That complexity should not stop families from asking hard questions. It makes early evidence preservation all the more important.

What To Do After Serious Harm Involving AI or Technology
After a serious injury, medical event, financial loss, privacy exposure, or death involving technology, take practical steps to protect yourself.
Preserve what you can. Save screenshots, product names, account records, emails, app messages, purchase confirmations, medical records, device packaging, incident reports, and communications with the company. When the technology itself may be evidence, do not alter the device, delete the app, reset the system, or discard the product before speaking with a lawyer.
Seek medical care when physical or emotional harm is involved. Report urgent safety risks to the appropriate provider, employer, platform, agency, or medical professional. Then speak with an attorney who can evaluate whether the facts support a legal investigation.
Contact McEldrew Purtell About an AI or Tech Liability Claim
AI and technology companies should not get a free pass because their products are complicated. If an automated system, AI tool, connected device, software platform, or technology product caused serious harm, McEldrew Purtell can review what happened and help you understand your options.
Contact McEldrew Purtell for a free consultation. We will listen to your story, evaluate the available facts, and determine whether a dangerous product, preventable technology failure, deceptive claim, or corporate safety breakdown may have played a role.
