
Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 8:09 pm
by parhelia_0000
I've been noticing this trend in a lot of the DB duels myself and others have been playing, whether Advanced or Customs: I play a meta deck or a custom deck I created, and my opponent starts insulting me with racist/sexist slurs, or even mentions self-harm/suicide, in the chat.

Yes, the Report Abuse feature is an option; however, we shouldn't have to resort to it every single time we deal with a toxic opponent. It's gotten to the point where players can no longer be trusted to show respect toward one another.

Example replays where people who have zero concept of dignity/respect decide to insult their opponents with slurs and mentions of self-harm:
https://www.duelingbook.com/replay?id=37528-43198189
https://www.duelingbook.com/replay?id=37528-43826971

As such, I am requesting an automated system that screens the duel chat for select slurs/terms and freezes players after 3 warnings. It works like this: if a player uses one of the banned terms in the duel chat, the system kicks in, blocks the message from being posted, and warns the player about the banned term. After 3 warnings, the system freezes the player.
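
The proposed flow could be sketched roughly like this (a minimal sketch, not DB's actual code; names are hypothetical, and harmless placeholder strings stand in for the real banned terms):

```python
import re

# Placeholder strings stand in for the actual banned list
BANNED_TERMS = {"slurone", "slurtwo"}
WARN_LIMIT = 3

class ChatFilter:
    """Blocks messages containing banned terms and freezes repeat offenders."""

    def __init__(self):
        self.warnings = {}   # player -> number of warnings issued
        self.frozen = set()  # players who hit the warning limit

    def submit(self, player, message):
        """Return 'posted', 'warned', or 'frozen' for one chat attempt."""
        if player in self.frozen:
            return "frozen"
        words = set(re.findall(r"[a-z']+", message.lower()))
        if words & BANNED_TERMS:
            self.warnings[player] = self.warnings.get(player, 0) + 1
            if self.warnings[player] >= WARN_LIMIT:
                self.frozen.add(player)
                return "frozen"
            return "warned"  # message blocked, player warned
        return "posted"
```

A real implementation would also need persistent storage across sessions and some appeal path; this only shows the warn-then-freeze counter itself.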

I understand that this will cause a lot of controversy, but I'm sick and tired of dealing with duelists who show no respect toward one another. I am hoping that with the introduction of the automated system, the admins will have one less thing to worry about and can focus more of their attention on rulings in rated games. Thank you.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 8:32 pm
by PENMASTER
this is what happens when you play customs, kiddos

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 8:47 pm
by Genexwrecker
parhelia_0000 wrote:As such, I am requesting that select slurs/terminologies be used in an automated system to automatically freeze players after 3 warnings. […]
Leaving freezes to an automated system is a very bad idea. The more we rely on a robot, the less it's regulated and the more mistakes can occur. We can't train a bot like a real judge, and I have a feeling a lot of illegitimate freezes would be issued.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 8:52 pm
by parhelia_0000
Genexwrecker wrote:
Leaving freezes to an automated system is a very bad idea. The more we rely on a robot, the less it's regulated and the more mistakes can occur. We can't train a bot like a real judge, and I have a feeling a lot of illegitimate freezes would be issued.

While I do understand the concerns of dealing with illegitimate freezes, there is an even bigger issue of DB players going a step TOO far and encouraging self-harm. The last thing I want to deal with is another @Shadow Player 2.0 who encourages self-harm/suicide.

And rest assured, a report has already been made based on this duel. However, I do feel the need to call people out when things go too far. It's coming to the point where DB is becoming YSFlight 2.0. I'm not making this up; I used to be part of that flight sim community, and even they had a few members who actively encouraged self-harm.

I want to see DB admins enforcing at least SOMETHING that will protect members like me from toxic members who think that the topic of self-harm/suicide is nothing more than a joke.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 9:19 pm
by greg503
parhelia_0000 wrote:

While I do understand the concerns of dealing with illegitimate freezes, there is an even bigger issue of DB players going a step TOO far and encouraging self-harm. The last thing I want to deal with is another @Shadow Player 2.0 who encourages self-harm/suicide.

And rest assured, a report has already been made based on this duel. However, I do feel the need to call people out when things go too far. It's coming to a point where DB is becoming YSFlight 2.0. I'm not making this up, I used to be part of that flight sim community and even they had a few members who actively encouraged self-harm.

I want to see DB admins enforcing at least SOMETHING that will protect members like me from toxic members who think that the topic of self-harm/suicide is nothing more than a joke.

This is what we have posting.php?mode=post&f=7 for

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 9:42 pm
by Fredblade
>Makes broken OP customs that are not fun to anyone
>Gets called out for it
>Starts to make strawman arguments like "muh meta decks" and scolds people about "muh competitive yugioh"
>"Why are people so rude to me?"

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 10:02 pm
by Christen57
parhelia_0000 wrote:

While I do understand the concerns of dealing with illegitimate freezes, there is an even bigger issue of DB players going a step TOO far and encouraging self-harm. The last thing I want to deal with is another @Shadow Player 2.0 who encourages self-harm/suicide.

And rest assured, a report has already been made based on this duel. However, I do feel the need to call people out when things go too far. It's coming to a point where DB is becoming YSFlight 2.0. I'm not making this up, I used to be part of that flight sim community and even they had a few members who actively encouraged self-harm.

I want to see DB admins enforcing at least SOMETHING that will protect members like me from toxic members who think that the topic of self-harm/suicide is nothing more than a joke.


There's no need for you or anyone else to be "protected" from toxic users on the site, unless you simply stay off the site. If people are being too toxic, you let the staff handle it, but the staff cannot and will not stop that toxicity from starting in the first place — they can only, and will only, punish it.

Also, Genexwrecker's right. Relying on artificial intelligence to act as a judge in any duel is nowhere near feasible at this time, as we haven't reached that point where artificial intelligence is advanced enough, and suitable enough, for that. Too many things would go wrong.

What if someone says a bad word completely by accident, like saying the F-word when they meant to say "Duck" because they accidentally hit the F key (right next to the D key on the keyboard) instead of the D key, or saying "Shit" when they meant to say "Shut" or "Shot"?

What if someone says a word that the system thinks is a bad word but really isn't? This does happen in customs: in the past, the system mistook harmless words like "Buggy," "Thorny," "Cucumber," "Pussycat," and "Leafage" for bad words, and it still sometimes does. I currently can't make any custom with "pussycat" in it (it will say it's inappropriate for public use), even though DuelingBook has a card called Nekogal #1 whose flavor text literally says "A pussycat-fairy. Contrary to her lovely beauty, she claws on her enemies."

These are things artificial intelligence would need to take into consideration but can't, because it's not yet advanced enough.
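
These false positives are the classic "Scunthorpe problem": a naive substring check flags harmless words that merely contain a banned string. A minimal illustration, using mild placeholder words rather than DB's actual filter list:

```python
import re

BANNED = {"horny", "pussy"}  # mild placeholders for a real banned list

def naive_flag(text):
    """Substring check: flags any message that merely CONTAINS a banned string."""
    t = text.lower()
    return any(bad in t for bad in BANNED)

def word_flag(text):
    """Whole-word check: flags only standalone banned words."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BANNED for w in words)
```

The substring version flags "Thorny" (it contains "horny") and "pussycat"; the whole-word version passes both, which is exactly why card text like Nekogal #1's trips naive filters.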

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 10:27 pm
by parhelia_0000
Christen57 wrote:

There's no need for you or anyone else to be "protected" from toxic users on the site, unless you simply stay off the site. If people are being too toxic, you let the staff handle it, but the staff cannot and will not stop that toxicity from starting in the first place — they can only, and will only, punish it.

Also, Genexwrecker's right. Relying on artificial intelligence to act as a judge in any duel is nowhere near feasible at this time, as we haven't reached that point where artificial intelligence is advanced enough, and suitable enough, for that. Too many things would go wrong.

What if someone says a bad word completely by accident, like saying the F-word when they meant to say "Duck" because they accidentally hit the F key (right next to the D key on the keyboard) instead of the D key, or saying "Shit" when they meant to say "Shut" or "Shot"?

What if someone says a word that the system thinks is a bad word but really isn't? This does happen in customs: in the past, the system mistook harmless words like "Buggy," "Thorny," "Cucumber," "Pussycat," and "Leafage" for bad words, and it still sometimes does. I currently can't make any custom with "pussycat" in it (it will say it's inappropriate for public use), even though DuelingBook has a card called Nekogal #1 whose flavor text literally says "A pussycat-fairy. Contrary to her lovely beauty, she claws on her enemies."

These are things artificial intelligence would need to take into consideration but can't, because it's not yet advanced enough.

1. AI technology is getting better and better with every iteration. Ever heard of Moore's Law? With faster processing technology, automation becomes more and more efficient, and as such, we can rule out the possibility that the AI will make mistakes.

2. Since DB was originally designed as a desktop platform and there are no plans for mobile compatibility, there should be NO excuse for people making silly typos that result in "accidental" terms being spewed out (unless of course you have butter fingers, in which case, carry on).

3. There are ways to make the system specifically target words that are offensive. I've seen it done with Discord bots, and if they can do it, why not DB?
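
Discord-bot-style filters typically match whole words against an exact blocklist. A rough sketch of that approach (placeholder terms, not any real bot's list):

```python
import re

BLOCKLIST = {"noob", "loser"}  # placeholder terms for illustration

# \b word boundaries so only the standalone word matches,
# not longer words that happen to contain it
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(BLOCKLIST))) + r")\b",
    re.IGNORECASE,
)

def is_offensive(message):
    """True if the message contains a blocklisted word as a whole word."""
    return bool(PATTERN.search(message))
```

Exact-word matching avoids flagging longer harmless words, but it is trivially evaded with letter swaps like "n00b", which is why such blocklists need constant human upkeep.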

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 10:32 pm
by Renji Asuka
parhelia_0000 wrote:

1. AI technology is getting better and better with every iteration. Ever heard of Moore's Law? With faster processing technology, automation becomes more and more efficient, and as such, we can rule out the possibility that the AI will make mistakes.

2. Since DB was originally designed as a desktop platform and there are no plans for mobile compatibility, there should be NO excuse for people making silly typos that result in "accidental" terms being spewed out (unless of course you have butter fingers, in which case, carry on).

3. There are ways to make the system specifically target words that are offensive. I've seen it done with Discord bots, and if they can do it, why not DB?

Meanwhile, Google, Twitter, and Facebook use AI to ban people, and 9 times out of 10 it's a wrongful termination

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:28 pm
by Genexwrecker
You have the ability to block users you do not wish to interact with. If they say 1 or 2 words to you, then ignoring and blocking them is an incredible option. Banning or freezing people for 1 insult or cuss word is not only extremely draconian but not how the world works anywhere outside of socialist states. Nobody is going to get arrested for calling somebody the n word once while passing them on the street.

We have rules and guidelines, and they are actually very reasonable. Harassing people generally gets you a warning, then a temporary freeze. If you constantly commit the same infraction and show no intention to stop after several chances and punishments, then you get removed from the site entirely. When you block harassers you take away all their power; this is why we implemented the feature, so that we don't need to freeze everyone on the site constantly. If I froze everyone who called a person noob or moron or the r word, half the site would be permanently banned at this point.
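
The escalation Genexwrecker describes (warning, then temporary freeze, then removal after repeated infractions) could be sketched as a simple ladder; the thresholds here are invented for illustration, not DB's actual policy:

```python
# Invented ladder: infraction 1 -> warning, 2-3 -> temporary freeze,
# 4 or more -> removal from the site
LADDER = ["warning", "temporary freeze", "temporary freeze", "removal"]

def sanction(infraction_count):
    """Map a player's running infraction count to the next sanction."""
    if infraction_count <= 0:
        return None  # no infractions, no sanction
    index = min(infraction_count - 1, len(LADDER) - 1)
    return LADDER[index]
```

The point of the ladder is that only sustained, repeated behavior reaches the permanent step, which is the opposite of freezing on a first offense.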

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:28 pm
by Genexwrecker
Renji Asuka wrote:

Meanwhile, Google, Twitter, and Facebook use AI to ban people, and 9 times out of 10 it's a wrongful termination

you forgot youtube

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:34 pm
by Genexwrecker
parhelia_0000 wrote:

1. AI technology is getting better and better with every iteration. Ever heard of Moore's Law? With faster processing technology, automation becomes more and more efficient, and as such, we can rule out the possibility that the AI will make mistakes.

2. Since DB was originally designed as a desktop platform and there are no plans for mobile compatibility, there should be NO excuse for people making silly typos that result in "accidental" terms being spewed out (unless of course you have butter fingers, in which case, carry on).

3. There are ways to make the system specifically target words that are offensive. I've seen it done with Discord bots, and if they can do it, why not DB?


1.) No, it is not, lol. The more it evolves, the worse it has become. Have you seen the car AIs for autopilot? They have killed over a dozen people since their inception and testing.

2.) You don't know what we have plans for. I do, but you don't.

3.) Specifically targeting words is the exact problem with this suggestion. All words can be used in a non-offensive manner and context, yes, including the n word. I myself have posted the n word on the site in some form that nobody can actually see, and your bot system would get every single judge banned. Two friends could also just be having a conversation that happens to include f-bombs, and your feature would target that as well.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:49 pm
by parhelia_0000
Genexwrecker wrote:You have the ability to block users you do not wish to interact with. If they say 1 or 2 words to you, then ignoring and blocking them is an incredible option. […]

While blocking people is a valid option, it won't stop them from slandering me in front of everyone else on DB. I've unfortunately seen too many cases of people using Duel Notes or Statuses to brag about their wins and call the people they beat "noobs," "losers," or other derogatory terms, leading to everyone else joining in on the bullying.

And I'm not even talking about just myself. I've even seen other duelists who are genuinely looking for fair games get attacked like that too, such as those who are looking to duel in old DM/GX season formats.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:52 pm
by Renji Asuka
Genexwrecker wrote:

you forgot youtube

Google owns youtube so :P

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Sun Oct 23, 2022 11:56 pm
by Genexwrecker
parhelia_0000 wrote:
While blocking people is a valid option, it won't stop them from slandering me in front of everyone else on DB. I've unfortunately seen too many cases of people using Duel Notes or Statuses to brag about their wins and call the people they beat "noobs," "losers," or any other derogatory term, leading to everyone else joining in the bullying party.

And I'm not even talking about just myself. I've even seen other duelists who are genuinely looking for fair games get attacked like that too, such as those who are looking to duel in old DM/GX season formats.
And when that occurs, report it and we will act accordingly.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Mon Oct 24, 2022 12:08 am
by Lil Oldman
Genexwrecker wrote:
parhelia_0000 wrote:
Christen57 wrote:
There's no need for you or anyone else to be "protected" from toxic users on the site, unless you simply stay off the site. If people are being too toxic, you let the staff handle it, but the staff cannot and will not stop that toxicity from starting in the first place — they can only, and will only, punish it.

Also, Genexwrecker's right. Relying on artificial intelligence to act as a judge in any duel is nowhere near feasible at this time, as we haven't reached that point where artificial intelligence is advanced enough, and suitable enough, for that. Too many things would go wrong.

What if someone says a bad word completely by accident, like saying the F-word when they meant to say "Duck" because they accidentally hit the F key (right next to the D key on the keyboard) instead of the D key, or saying "Shit" when they meant to say "Shut" or "Shot"?

What if someone says a word that the system thinks is a bad word but really isn't? This does happen in customs: the system has mistaken harmless words like "Buggy," "Thorny," "Cucumber," "Pussycat," and "Leafage" for bad words in the past, and it still sometimes does. I currently can't make any custom with "pussycat" in it (it will say it's inappropriate for public use), even though duelingbook has a card called Nekogal #1 whose flavor text literally says "A pussycat-fairy. Contrary to her lovely beauty, she claws on her enemies."

These are things artificial intelligence would need to take into consideration but can't, because it's not yet advanced enough.
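The false positives described above are the classic failure mode of naive substring filters: a banned string buried inside a harmless word ("pussycat") gets flagged. A minimal sketch of the difference between substring matching and word-boundary matching, with a made-up one-entry word list and hypothetical function names, purely for illustration:

```python
import re

# Illustrative one-entry banned list; not DB's actual filter.
BANNED = ["pussy"]

def substring_flagged(text):
    """Naive check: flags any message containing a banned substring."""
    return any(b in text.lower() for b in BANNED)

def word_flagged(text):
    """Word-boundary check: flags only whole-word matches."""
    return any(re.search(rf"\b{re.escape(b)}\b", text.lower()) for b in BANNED)

# "pussycat" contains the banned string as a substring, so the naive
# filter flags it; the word-boundary filter lets it through because
# the banned string is followed by another letter.
```

Word-boundary matching fixes this particular class of false positive, but not the context problem raised later in the thread, where the same word can be innocent or abusive depending on how it's used.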

1. AI technology is getting better and better with every iteration. Ever heard of Moore's Law? With faster processing technology, automation becomes more and more efficient, and as such, we can rule out the possibility that the AI will make mistakes.

2. Since DB was originally designed as a desktop platform and there are no plans for mobile compatibility, there should be NO excuse for silly typos resulting in "accidental" terms slipping out (unless, of course, you have butter fingers, in which case, carry on).

3. There are ways to make the system specifically target words that are offensive. I've seen it done with Discord bots, and if they can do it, why not DB?


1.) No, it is not, lol. The more it evolves, the worse it has become. Have you seen the car AIs for autopilot? They have killed over a dozen people since their inception and testing.

2.) You don't know what we have plans for. I do, but you don't.

3.) Specifically targeting words is the exact problem with the suggestion. All words can be used in a non-offensive manner and context, yes, including the n word. I myself have posted the n word on the site in some form that nobody can actually see, and your bot system would get every single judge banned. Two friends could also just be having a conversation that happens to include f-bombs, and your feature would target that as well.

Playing devil's advocate here on No. 1: how many people have regular drivers killed? The AI-driver argument is more confirmation bias than anything.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Tue Oct 25, 2022 1:38 am
by PENMASTER
If you listen to what someone on the internet tells you to do to yourself, it's your own fault, honestly.
AI is shit, honestly, and takes way too many resources when we already have the solution of "man the fuck up," and we shouldn't end up in one of those free-speech slippery slopes with AI.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Tue Oct 25, 2022 12:44 pm
by Renji Asuka
PENMASTER wrote:If you listen to what someone on the internet tells you to do to yourself, it's your own fault, honestly.
AI is shit, honestly, and takes way too many resources when we already have the solution of "man the fuck up," and we shouldn't end up in one of those free-speech slippery slopes with AI.

That's a really bad take. You never know the mindset of the person behind the screen. For all you know, they could have depression, and someone telling them to kill themselves may push them over the edge.

Also, if you tell a person to kill themselves and they do, you can be held legally responsible.

Re: Automate freezes/bans when someone uses racist slurs or any mentions of self-harm

Posted: Tue Oct 25, 2022 1:06 pm
by parhelia_0000
Renji Asuka wrote:
PENMASTER wrote:If you listen to what someone on the internet tells you to do to yourself, it's your own fault, honestly.
AI is shit, honestly, and takes way too many resources when we already have the solution of "man the fuck up," and we shouldn't end up in one of those free-speech slippery slopes with AI.

That's a really bad take. You never know the mindset of the person behind the screen. For all you know, they could have depression, and someone telling them to kill themselves may push them over the edge.

Also, if you tell a person to kill themselves and they do, you can be held legally responsible.

Thank you. This is what I've been trying to say, especially since I lost some of my friends to suicide in the past, so this topic really hits home for me.

It really surprises me how insensitive so many people are about things like this, and the last thing I want is for DB to turn into a toxic environment where death threats and suicide encouragement become the norm.

Please, listen to reason about this. I made this suggestion in good faith, and I hope that at least one of the admins will understand the severity of these matters.