When you have pretty much every toon unlocked, someone will need those +5s. If you have enough +20s for every toon... well, you are friggin amazing!

If you're at the point in the game where you can't afford to just max every mod, then maybe you don't need that many mods yet, but you will later. And fortunately at that point you will have the credits to do so.

I never bothered to find out: do all mods count against your inventory, or just the unequipped ones?

You're leveling up your mods the wrong way, which is why you're wasting credits.

You're supposed to level them in stages of 3 levels at a time, checking whether a speed secondary rolls onto the list:

White = Lv12
Green = Lv9
Blue = Lv6
Purple = Lv3

If you don't see any speed stat, stop and try another mod. If you do, proceed to slice and level up by 3 each time to see if you can at least get 2-3 stacks, depending on the mod tier and your preference.

I've personally modded up 7 white mods, each ending with 20-26 speed. It's all about knowing when to proceed or stop for resource management.

Mods are the worst part of the game. 99% luck (yes, knowing when to roll/slice plays into it, but what rolls is pure luck), which means 1% strategy. I spent 150 gems tonight refreshing mod energy for double drops and sold every mod I got because not a single one had speed. Also, does it seem like some stats roll more consistently than others? I think defense and tenacity roll 50% of the time, while speed is maybe 10-15% of the time. It definitely doesn’t seem like a flat 25% for every stat to roll.

> Also, does it seem like some stats roll more consistently than others? I think defense and tenacity roll 50% of the time, while speed is maybe 10-15% of the time. It definitely doesn’t seem like a flat 25% for every stat to roll.

Naw man. Random number generation is the perfect scenario for fooling people into seeing patterns where there are only ordinary occurrences rooted in equal probabilities. Try playing Yahtzee. I have had so many speed rolls in the past 2 months that if this were Yahtzee I would say, "I'm good at rolling speeds", but in reality the most accurate way of describing my results would be, "I happen to have rolled a bunch of speeds."

The thing is that if you are not an [Removed] then you see these patterns. [Removed] miss them (believe it or not). Where your intellect really has the opportunity to shine is here: do you realize that the patterns you are recognizing are really probabilistically equal, constantly trading places so long as you roll enough times? That is what separates the person who recognizes patterns and draws proper conclusions from the one who draws erroneous ones. It's hot today: global warming, weather I don't like, or both to some extent? Data is always real. Your conclusions may not be.

> Mods are the worst part of the game. 99% luck (yes, knowing when to roll/slice plays into it, but what rolls is pure luck), which means 1% strategy.

Knowing when to roll/slice is way more than 1% of the battle. More importantly, it is the only thing you control. The better you control it, the more chances you get for good luck.

@Mephisto_style How big of a sample size do you need, though? I have 500+ rolls tracked, and speed procs at a 20% rate over that sample; I would have expected 25%, as you said. It's possible I'm just on a bad trend (expecting 125, got 100). I'm not enough of a stats person to know how big a sample I need, and too lazy to figure it out.

This is also almost exclusively speed sets. I wouldn't be surprised if speed secondaries roll lower on speed sets than on other sets, in order to equalize the value of each mod set over time, meaning the 10% base speed increase could be made up over time with non-speed sets. This would be awesome, if true, because of the benefits of other set bonuses (especially offense and health).

@cannonfodder_iv
Assuming a binomial distribution with p = 0.25, the probability of 100 or fewer successes out of 500 trials is p = .0049037. So you are a ~1 in 200 case. In the population of the whole game that's not surprising at all, but it may be considered a surprising result if the population is limited to just "people playing swgoh and tracking speed slices".
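That tail probability is just a binomial CDF, and it is easy to check exactly with a few lines of Python. This is only a sketch to verify the arithmetic; the 500 trials, 100 successes, and 25% rate are the figures from the post above, not anything confirmed about the game:

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of 100 or fewer speed rolls in 500 slices at a true 25% rate
print(binom_cdf(100, 500, 0.25))  # ~0.0049, i.e. a ~1 in 200 result
```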

Here's the interesting part, though (to me at least). If you (or anyone else tracking drops) have ever checked your success rate partway through your tracking and thought "I must just need a bigger sample size", then you've committed a form of scientific malpractice that is very common and almost always unintentional: p-hacking. It's ultra common in fields like the social sciences, epidemiology, nutritional science, and psychology, but it shows up in all research that involves statistics, and it produces silly news headlines like "Scientific study shows chocolate helps you lose weight".

To give a concrete example: if someone tracked 150 slices and got 24 hits (19.2% success), this would be p = .00540223, which again is a ~1 in 200 case. Following this, further tracking is done to increase the sample size. To get to 100 successes out of 500 slices, this person would need 76 out of the next 350 slices (21.7% success), which is p = .08575196, a ~1 in 11 occurrence. To reach the mean result of 25% after N=500, they would need to have sliced speed 101 more times, and the probability of 101 or more out of 350 (28.9% success) is p = .05592093, a ~1 in 18 occurrence. So returning to the mean is quite a bit less probable than merely remaining below it.
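The three probabilities in this example can be reproduced exactly with the binomial CDF. A pure-Python sketch, using only the counts quoted above:

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 24 hits in the first 150 slices at a true 25% rate: ~1 in 200
print(binom_cdf(24, 150, 0.25))       # ~0.0054

# 76 or fewer of the next 350 keeps the total at or below 100/500: ~1 in 11
print(binom_cdf(76, 350, 0.25))       # ~0.086

# fully returning to the 25% mean needs 101+ of the next 350: ~1 in 18
print(1 - binom_cdf(100, 350, 0.25))  # ~0.056
```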

None of this changes the fact that your data was a 1 in 200 occurrence, and always would have been, regardless of whether you checked at any point during the tracking. The point is in the reporting of the data, and in who reports it. In science, what happens is that people check a hypothesis, see a trend, note that it's not significant, and so collect more data. When the new data is combined with the old, the probability of a "statistically significant" result increases, and the paper gets published with an incorrect conclusion. In swgoh tracking, multiple people track a few tens or hundreds of drops, see the expected rate, and stop tracking without ever reporting. Those (not necessarily you) who track a few tens of drops and get a rubbish drop rate (but not one that is statistically significant) come on here, tell people, and get the response "just track more data". I cringe at this, because said person then goes away, tracks more data, and later rolls it into the previous data, producing a much bigger sample size that has a very high probability of showing a large difference from the true rate (whatever it is).
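The check-as-you-go effect can be demonstrated with a small simulation. This is purely a sketch: the batch size, number of interim looks, and z threshold are my own illustrative choices, not anything from the game. Even with the true rate exactly 25%, a tracker who re-checks after every batch and stops as soon as the running rate looks "significantly" low will flag far more often than the nominal one-sided 2.5%:

```python
import random

def tracker_flags_low(true_p=0.25, batch=50, checks=10, z_crit=1.96, seed=0):
    """Simulate a drop-rate tracker who re-checks the running rate after
    every batch and stops as soon as it looks 'significantly' below true_p."""
    rng = random.Random(seed)
    hits = trials = 0
    for _ in range(checks):
        hits += sum(rng.random() < true_p for _ in range(batch))
        trials += batch
        se = (true_p * (1 - true_p) / trials) ** 0.5
        if (hits / trials - true_p) / se < -z_crit:
            return True  # would report a "bad drop rate" at this interim look
    return False

runs = 2000
flagged = sum(tracker_flags_low(seed=s) for s in range(runs))
# Well above the nominal 2.5% for a single one-sided look
print(f"flagged low in {flagged / runs:.1%} of runs")
```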

Anyway, I've been wanting to get that off my chest for a while; it isn't necessarily directed at you, just prompted since you asked about the stats.

> not a stats person to know how big of a sample I need and too lazy to figure it out.

To answer this part specifically: there is no sample size that is "too small" in the abstract. This is a common misconception. A sample size can only be "too small to show that the hypothesis .... can be rejected with a confidence level of ...."

Showing that a drop rate in this game is not 99.99999% can be done with a sample size of about 5 sims, at an extremely high confidence level. Showing that a drop rate differs from 33% by more than 5%, with a confidence level over 99%, takes about 500 sims. Showing that it differs from 33% by more than 1%, with a confidence level over 99%, probably takes at least 10,000 sims.

That is, given that your aim is to reject the hypothesis that the drop rate is 33%. Things are different if you just wish to report the value you measured with appropriate confidence intervals, in which case the interval width depends on the sample size and the other assumptions made.
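Those rough sample sizes follow from the usual normal-approximation rule n ≈ (z·σ/d)² for a Bernoulli rate, where σ² = p(1−p) and d is the deviation you want to detect. A quick sketch (using z = 2.576 for ~99% two-sided confidence is my assumption):

```python
from math import sqrt, ceil

def required_n(p, delta, z=2.576):
    """Approximate number of Bernoulli trials needed to detect a deviation
    of `delta` from rate `p` at the given z (2.576 ~ 99% two-sided)."""
    return ceil((z * sqrt(p * (1 - p)) / delta) ** 2)

print(required_n(0.33, 0.05))  # several hundred sims
print(required_n(0.33, 0.01))  # over ten thousand sims
```

The 5% case lands near 600 and the 1% case near 15,000, the same order of magnitude as the "about 500" and "at least 10,000" figures quoted above.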

I don't disagree with what you have said, except that you don't seem to be conveying that you do indeed need a sufficient sample size to model reality. Not having that defeats the point of statistics and brings us back into the realm of "I believe it because some anecdotal evidence I've seen fulfills my confirmation bias."

In the case of mod slicing and the perception of data, I don't believe any of the complainers are collecting and analyzing data. So I would indeed say, "You need more n." Right now they are at 0.

Sorry, that wasn't my intention either; you do of course need a sufficient sample size to make a claim! You just need to decide on:

1) what your claim is
2) what your assumptions are
3) what confidence level you wish to reach
4) calculate the required sample size needed for 1), 2) and 3)

and then collect the data without changing any of the above. A rather high bar to meet, haha!

That was exactly why I chose not to answer his question directly.

## Replies

Yeah, it's just going to make me put a 15-speed mod on Princess Leia to use it.

You can expect it when you least expect it?

Exactly.

But if he got a boost at 3 or 6 then he'd have kept going to 9 or 12 or 15.

So if the mod starts out bad at 3/6, he just doesn't bother seeing if his luck would change.

Not entirely a bad idea.

With 3 good stats to choose from & you get the 4th on your first 2 rolls, I can see why you might cut your losses & move on to a different mod.

I mean, if it had even been Protection & Crit Chance, I could see continuing, but after a pair of Protection %, yeah, maybe save the credits.

This is why I disagree with anyone who claims mods are 100% luck.

Pro tip: if you long-press the Battle button in either arena, you get the option to spend 2.5M credits to boost your entire team by +50 speed.

This is why I stopped farming mods and started saving credits. I need those credits more than a potential +17.

We need to keep this stuff a secret, man.

I want to know how many players tried that after reading it! haha

Let's stop bypassing the filter, shall we? ~Rtas
