
46 Replies

@9MKD8QM from Florida answered…2wks

ABSOLUTELY NOT, and AI is not a person NOR a PEER; using it would make a mockery of our legal system, which is already plagued by several other issues.

@9N92GYS from North Carolina answered…2 days

Perhaps eventually, but not until AI is far better researched and regulated to ensure that its decisions don’t reflect preexisting human biases and can be held accountable for mistakes

@9N8LTDS from New Jersey answered…2 days

No, this sounds like a slippery slope toward removing compassion, empathy, and sympathy, taking the humanity out of the judicial system.

@9K99V29 from Florida answered…4 days

No, but the applications of artificial intelligence in such systems should be looked into and invested in.

@9N33S8D Independent from Alabama answered…5 days

No, AI should not be used to make decisions in criminal justice systems, but it may be used to better facilitate research around the details of a trial, providing quicker turnaround on evidence and precedent for the judge and jury to use in reaching a decision and appropriate sentence.

@rosetintedarcher answered…1wk

Yes, as long as it is trained and tested using a politically, ethnically, and otherwise diverse range of human testers and trainers.

@9MRL3HG from California answered…1wk

No, artificial intelligence should never be something for people to rely on; it should simply be used as a tool.

@9MPNYMJ from Illinois answered…1wk

No, not unless the AI model being used by the government is carefully vetted for bias and the company or organization producing it is thoroughly scrutinized.

@9MP2VM5 Peace and Freedom from Texas answered…1wk

Most of the time, humans get carried away by emotions and make wrong decisions in court; AI would help avoid being unfair just because of sympathy.

@9MNY3TS from New Jersey answered…1wk

Yes, but only for supplemental research and aiding in decision-making. It should not be the final answer.

@9MN8926 from Kansas answered…2wks

Yes, but not to issue rulings and sentences, only to collect all potentially relevant case files and precedents so every defendant has a more fair trial overall.

@9MN4PGY from California answered…2wks

Yes, however, AI needs to be inspected and absolutely proven. All jurors need to be educated about AI because it is very tricky.

@9MLXQTT from New York answered…2wks

I think it depends. AI gets its "mind" from whoever programs and creates it, so how would the system know whether the AI is biased or not?

@9K99V29 from Florida answered…2wks

No, but the applications of artificial intelligence in criminal justice systems should be looked into

@SenBR2003 from New York answered…2wks

Begin with implementing AI in specialized mock criminal trials to study their effectiveness, then adjust AI programs accordingly before gradually including them in criminal trials.

@9MM844W from Wisconsin answered…2wks

No, because AI will judge purely by what is legal and illegal. It lacks human emotions and the understanding of situations where someone deserves justice after being raped or having a family member killed, or where it was self-defense.

@bahzilfr from Pennsylvania answered…2wks

As of right now, no. As AI improves, it may become usable, but it would need massive checks for bias first.

@9MM4NFT from Virginia answered…2wks

I think it can help give objective thought processes, but I do not think it should be the end all be all.

@9MM2ZQV Independent from North Carolina answered…2wks

It could be useful, but it could just as easily be hacked and wrongfully free a bunch of criminals.

@9MM288V from New Jersey answered…2wks

Somewhat, I think they can help go deeper into a case but shouldn't be used to make a full decision.

@9MM232Q from Missouri answered…2wks

Yes and no. I don't think it would make the best decisions, but the law isn't always right, and in some cases it should come down to how moral the decision is.

@Dry550 Independent from Illinois answered…2wks

Yes, a machine has no moral say in matters; it can execute a sentence or assist in law enforcement without second-guessing itself.

@9MLVKDY Democrat from New York answered…2wks

No, but it should be used to assist in analyzing the facts of a case

@9MLMS5Y from Kansas answered…2wks

An AI model could be implemented to compare and contrast court findings and rulings and eliminate bias.

@9MLKF77 Independent from Georgia answered…2wks

No - the technology is not ready for something like this. However, I'll re-evaluate this in 2028.

@9MLGS34 from Pennsylvania answered…2wks

There are some instances where AI is helpful, and others where it won't be.

@9MLF9S8 from California answered…2wks

No, but it could instead be used to sum up the information in a case, providing a clearer picture of all the evidence, not to make decisions.

@9MLF5VJ from New Mexico answered…2wks

No, AI should not. People do things because of emotions, and other people can feel those emotions, but robots can't.

@3JZDMSD Independent answered…2wks

Yes, as long as there is a governance committee driving the personas, parameters and workflows in use, and 4 sigma plus quality evaluations.

@Spartan0536 from Florida answered…2wks

ABSOLUTELY NOT! This is a gross perversion of our legal system as an AI is not a "peer".

@9ML5WGR from Wisconsin answered…2wks

Yes, as long as we can be sure it’s programmed to eliminate bias AND is used as a tool for people to make decisions and it isn’t making the decision itself.

@9MKXTDH from Michigan answered…2wks

I have never given this any thought before… I can see both sides of the issue, tbh.

@9MK7TRB Republican from California answered…2wks

Maybe, once we can prove it's ready. It would be better than humans; humans are faulty and make mistakes.
