I was trying to understand what happens if I use Dispatchers.IO instead of Dispatchers.Default, using this code:
runBlocking {
    val list = mutableListOf<Job>()
    val time1 = System.currentTimeMillis()
    for (i in 1..100) {
        val job = lifecycleScope.launch(Dispatchers.IO) {
            val c = "c$i"
            Log.i("CoroutineRunnerLog", "Launched a coroutine $c on Thread : ${Thread.currentThread().name}")
            for (j in 1..10) {
                Log.d("CoroutineRunnerLog", "Coroutine $c, j = $j, Thread : ${Thread.currentThread().name}")
            }
        }
        list.add(job)
    }
    list.joinAll()
    val time2 = System.currentTimeMillis()
    Log.e("CoroutineRunnerLog", "difference is = ${time2 - time1}")
}
Since Dispatchers.IO can expand up to 64 threads, while Dispatchers.Default has a maximum number of threads equal to the number of CPU cores (8 in my case), I expected Dispatchers.IO to have better performance (though I am not sure about this).
I just want to understand why Dispatchers.Default runs faster than Dispatchers.IO despite having fewer threads in its thread pool.
Output:
Dispatchers.IO = 200 - 300 ms
Dispatchers.Default = 80 - 100 ms
Running CPU-bound jobs on more threads than there are CPU cores results in excessive context switching and thread-management overhead, increasing the total execution time.
Since your jobs only compute without waiting, using Dispatchers.Default reduces the total execution time.
However, if you introduce waiting (e.g., Thread.sleep(500)) in your jobs, Dispatchers.IO will complete the total work sooner, simply because its larger thread pool lets more tasks block in parallel.
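You can observe this directly. The following is a minimal standalone sketch of the blocking case (assuming kotlinx.coroutines on the classpath; lifecycleScope and Log are replaced with a plain launch and println so it runs on a regular JVM, and the sleep duration is an arbitrary choice):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Launches 100 tasks that each block for 50 ms and returns the wall-clock time.
fun timeBlockingTasks(dispatcher: CoroutineDispatcher): Long = measureTimeMillis {
    runBlocking {
        val jobs = List(100) {
            launch(dispatcher) {
                Thread.sleep(50) // blocking wait: ties up the thread the whole time
            }
        }
        jobs.joinAll()
    }
}

fun main() {
    val defaultTime = timeBlockingTasks(Dispatchers.Default) // roughly one thread per core
    val ioTime = timeBlockingTasks(Dispatchers.IO)           // up to 64 threads
    println("Default: $defaultTime ms, IO: $ioTime ms")
    // On a typical 8-core machine Dispatchers.IO finishes far sooner here,
    // because many more tasks can sit blocked in parallel.
}
```

With blocking work the situation from the question inverts: the dispatcher with more threads wins, because throughput is limited by how many tasks can wait at once, not by the CPU.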
Expected behaviour should be Dispatcher.IO should have better performance
This is fundamentally incorrect. If you have 8 cores running CPU-bound tasks, you should expect worse performance when using more than 8 threads simultaneously. Every extra thread entails context switching that would not occur if there were 8 or fewer threads running on the CPU.
Dispatchers.IO can expand upto 64 threads
Dispatchers.IO has a lot of threads because those threads are designed for blocking calls (e.g., network requests or reading from disk).
Using a lot of threads is only an advantage when each thread spends most of its time waiting rather than running on the CPU.
To sum up: a CPU running at 100% gains nothing from having more threads than cores; quite literally the opposite.
On a final note, I'd like to stress that it is imperative that you don't block threads on the Dispatchers.Default dispatcher. Since that dispatcher has a limited number of threads, any blocked thread will very much hurt the performance of your code.
If you plan to call a blocking operation, use withContext as such:
withContext(Dispatchers.IO) {
    // blocking code goes here
}
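As a concrete usage example, here is a small runnable sketch of that pattern (assuming kotlinx.coroutines; readConfigBlocking is a hypothetical stand-in for a real blocking disk or network call):

```kotlin
import kotlinx.coroutines.*

// Hypothetical blocking call standing in for disk or network I/O.
fun readConfigBlocking(): String {
    Thread.sleep(100) // simulate a blocking read
    return "config-loaded"
}

// Suspending wrapper: the blocking work runs on Dispatchers.IO,
// so the caller's dispatcher (e.g. Dispatchers.Default) is never blocked.
suspend fun readConfig(): String = withContext(Dispatchers.IO) {
    readConfigBlocking()
}

fun main() = runBlocking {
    val result = readConfig()
    println(result)
}
```

Wrapping the blocking section in withContext(Dispatchers.IO) keeps Default's small pool free for CPU work while the wait happens on an IO thread.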
I would say it is because of the context-switching overhead in Dispatchers.IO.
In your case, the 100 coroutines only log in a tight loop, so the work is entirely CPU-bound.
What should you use?
If your workload doesn't heavily rely on blocking I/O, Dispatchers.Default will usually be faster due to fewer threads and less context-switching overhead.
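To make the comparison reproducible, here is a minimal standalone sketch of the CPU-bound case (assuming kotlinx.coroutines; the loop size is an arbitrary choice, and absolute timings will vary by machine):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Pure CPU work: no blocking calls, no suspension points.
fun busyWork(): Long {
    var acc = 0L
    for (k in 1..2_000_000) acc += k
    return acc
}

// Runs 100 CPU-bound tasks on the given dispatcher and returns wall-clock time.
fun timeCpuTasks(dispatcher: CoroutineDispatcher): Long = measureTimeMillis {
    runBlocking {
        List(100) { launch(dispatcher) { busyWork() } }.joinAll()
    }
}

fun main() {
    println("Default: ${timeCpuTasks(Dispatchers.Default)} ms")
    println("IO:      ${timeCpuTasks(Dispatchers.IO)} ms")
    // For pure CPU work, Dispatchers.Default is typically at least as fast:
    // the extra IO threads only add scheduling and context-switch overhead.
}
```

This mirrors the question's benchmark without the Android dependencies, so you can rerun it on any JVM and see how the gap changes with core count.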