Threading in C#: 7 things you should always remember

Have you ever spent a few hours trying to debug a non-deterministic problem in your multithreaded application? If so, then you should definitely read this article. If not, it is still a good way to revise your current knowledge of threading challenges in C#. Being aware of a few common facts about threading can help you considerably in building well-designed, error-proof multithreaded applications in the future.

1. Threads share data if they have a common reference to the same object instance

The code below:

    using System;
    using System.Threading;

    class SomeClass
    {
        private bool _isWorkDone;

        static void Main(string[] args)
        {
            SomeClass someClass = new SomeClass();
            Thread newThread = new Thread(someClass.DoWork);
            newThread.Start();
            someClass.DoWork();
            Console.Read();
        }

        void DoWork()
        {
            if (!_isWorkDone)
            {
                _isWorkDone = true;
                Console.WriteLine("Work done");
            }
        }
    }

When executed, it prints "Work done" on the screen. As you can see in the example above, both threads (the main one and newThread) call the DoWork() method on the same instance of SomeClass. As a result, although the _isWorkDone field is non-static, they share it. Consequently, "Work done" is typically printed just once (strictly speaking, because the check and the assignment are not synchronized, there is a small race window in which both threads could print it), whereas a programmer who is not aware of the shared field would expect it to be printed twice.
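For comparison, here is a minimal sketch (this variation is mine, not part of the original example) in which each thread works on its own instance. Because the two _isWorkDone fields now live in different objects, nothing is shared and "Work done" is printed twice:

    using System;
    using System.Threading;

    class SomeClass
    {
        private bool _isWorkDone;

        static void Main(string[] args)
        {
            // Each thread gets its own SomeClass instance, so the
            // _isWorkDone fields are independent.
            Thread newThread = new Thread(new SomeClass().DoWork);
            newThread.Start();
            new SomeClass().DoWork();
            Console.Read();
        }

        void DoWork()
        {
            if (!_isWorkDone)
            {
                _isWorkDone = true;
                Console.WriteLine("Work done");   // printed once per instance, i.e. twice in total
            }
        }
    }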

2. “Finally” blocks in background threads are not guaranteed to execute when the process terminates

The code below:

    using System;
    using System.Threading;

    class SomeClass
    {
        static void Main(string[] args)
        {
            SomeClass someClass = new SomeClass();
            Thread backgroundThread = new Thread(someClass.DoWork);
            backgroundThread.IsBackground = true;
            backgroundThread.Start();
            Console.WriteLine("Closing the program....");
        }
        void DoWork()
        {
            try
            {
                Console.WriteLine("Doing some work...");
                Thread.Sleep(1000);
            }
            finally
            {
                Console.WriteLine("This should be always executed");
            }
        }
    }

When executed, it typically prints "Doing some work…" and "Closing the program…" on the screen. As you can see in the example above, when the process terminates (because the main thread has finished its execution), the background thread is simply torn down and its "finally" block never runs. Not being aware of this can cause serious trouble when there is clean-up work to be done at the end, such as closing streams, releasing resources or deleting temporary files.
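For comparison, here is a minimal sketch of a modified Main (my variation, reusing SomeClass.DoWork from above): if the main thread waits for the background thread before returning, the process does not terminate early and the finally block runs as expected.

    static void Main(string[] args)
    {
        SomeClass someClass = new SomeClass();
        Thread backgroundThread = new Thread(someClass.DoWork);
        backgroundThread.IsBackground = true;
        backgroundThread.Start();
        // Waiting for DoWork to finish gives its finally block a chance to run
        // before the process terminates.
        backgroundThread.Join();
        Console.WriteLine("Closing the program....");
    }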

3. Captured values in lambda expressions are shared as well

You might presume that the code below:

using System;
using System.Threading;

class SomeClass
{
    static void Main(string[] args)
    {
        for (int i = 0; i < 10; i++)
        {
            Thread thread = new Thread(() => Console.Write(i));
            thread.Start();
        }
        Console.Read();
    }
}

should result in the output 0123456789. Well, it doesn't. The result is completely non-deterministic! The trick here is that the i variable refers to the same memory location throughout the lifetime of the for loop. As a result, each thread calls Console.Write on a variable that may be changing while the thread runs (a thread may even print 10, the value i has after the loop finishes). The solution is to use a temporary variable:

for (int i = 0; i < 10; i++)
{
    int temp = i;
    Thread thread = new Thread(() => Console.Write(temp));
    thread.Start();
}

4. Locking does not restrict access to the synchronizing object itself in any way

This means that if one thread calls lock(x) and another thread calls x.ToString(), the latter will not be blocked. In other words, objects used with lock act as lockers; they are not themselves locked.
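A minimal sketch illustrating this (the class name LockerDemo and the Sleep calls used for timing are illustrative assumptions of mine): the main thread can happily call x.ToString() while another thread holds the lock; only another lock(x) has to wait.

    using System;
    using System.Threading;

    class LockerDemo
    {
        // x is only used as a synchronization token; lock(x) does not "freeze" x.
        static readonly object x = new object();

        static void Main(string[] args)
        {
            Thread holder = new Thread(() =>
            {
                lock (x)
                {
                    Thread.Sleep(2000);   // hold the lock for a while
                }
            });
            holder.Start();

            Thread.Sleep(100);            // give the holder thread time to acquire the lock

            // Not blocked: calling a method on x does not acquire the lock.
            Console.WriteLine(x.ToString());

            // This, however, has to wait until the holder thread releases the lock.
            lock (x)
            {
                Console.WriteLine("Acquired the lock after the holder released it");
            }
        }
    }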

5. Try/catch/finally blocks that are in scope when a thread is created are of no relevance to the thread once it starts executing

You should be aware that the code below:

using System;
using System.Threading;

class SomeClass
{
    static void Main(string[] args)
    {
        try
        {
            Thread thread = new Thread(() => Divide(10, 0));
            thread.Start();
        }
        catch (Exception ex)
        {
            Console.WriteLine("An exception occurred");
        }
    }
    static void Divide(int x, int y)
    {
        int z = x / y;
    }
}

will not result in the exception being caught in the catch block of the Main method. The DivideByZeroException thrown on the new thread remains unhandled and causes the program to shut down. The most natural way of solving this problem is, of course, to move the try/catch block into the code that the new thread executes, for example into the Divide method itself.
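A minimal sketch of that fix (reusing the Divide method from the example, with the try/catch moved inside it): the exception is now handled on the thread on which it is thrown.

    static void Divide(int x, int y)
    {
        try
        {
            int z = x / y;   // throws DivideByZeroException when y == 0
        }
        catch (DivideByZeroException)
        {
            // Caught on the worker thread itself, not in Main.
            Console.WriteLine("An exception occurred");
        }
    }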

6. If an object is thread-safe it does not imply that you don’t need to lock around accessing it

Take a look at the code below:

if (!_list.Contains(newItem))
{
    _list.Add(newItem);
}

Let's say that the class of _list is fully thread-safe. Nevertheless, there is still a possibility that, between the Contains check and the Add call, another thread has already added the same item. The example shows that using a thread-safe class does not automatically make the code that uses it thread-safe. One common fix is sketched below.
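The fix is to make the whole check-then-act sequence atomic by taking a lock around it. A minimal sketch, assuming _lock is a private object field used only for synchronization (every piece of code touching _list would have to take the same lock):

lock (_lock)
{
    // No other thread that also locks on _lock can add an item
    // between the Contains check and the Add call.
    if (!_list.Contains(newItem))
    {
        _list.Add(newItem);
    }
}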

7. Your program's instructions can be reordered by the compiler, the CLR or the CPU to improve efficiency

This one can be really tricky for developers who are not aware of it. To explain the problem, let's first take a look at the following piece of code:

using System;

class SomeClass
{
    private int _value;
    private bool _done;

    void A()
    {
        _value = 1;
        _done = true;
    }
    void B()
    {
        if (_done)
        {
            Console.WriteLine(_value);
        }
    }
}

The question is: is it possible, assuming that A and B run concurrently on different threads, that B will write "0" on the screen? Thinking about it logically, it should not be: we cannot enter Console.WriteLine before _done is set to true, and to set _done to true we first need to assign 1 to the _value field. Surprisingly though, it is possible, because:

  • The compiler, the CLR or the CPU may reorder your program's instructions to improve efficiency.
  • The compiler, the CLR or the CPU may introduce caching optimizations, so that assignments to variables are not visible to other threads right away.

Okay, so now the question arises: how do we solve such a problem? There are at least a few solutions, but the one that is usually preferred is to create memory fences. In short, a full memory barrier (fence) prevents any kind of instruction reordering or caching around that barrier. In C# you call Thread.MemoryBarrier() to generate such a fence. In the above example the following code would solve our problem:

using System;
using System.Threading;

class SomeClass
{
    private int _value;
    private bool _done;

    void A()
    {
        _value = 1;
        Thread.MemoryBarrier();
        _done = true;
    }
    void B()
    {
        if (_done)
        {
            Thread.MemoryBarrier();
            Console.WriteLine(_value);
        }
    }
}
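For completeness, here is a minimal usage sketch (the Main method is my addition, placed inside SomeClass so that it can reach the private methods): with the two Thread.MemoryBarrier() calls in place, if B sees _done as true it is also guaranteed to see _value as 1, so it can never print 0.

    static void Main(string[] args)
    {
        SomeClass someClass = new SomeClass();
        // Run A and B concurrently on two different threads.
        new Thread(someClass.B).Start();
        new Thread(someClass.A).Start();
        Console.Read();
    }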

If you would like to know more about the different challenges and problems you can encounter while building multithreaded applications, I strongly recommend the free e-book by Joseph Albahari: Threading in C#. If you know some other interesting scenarios connected with threading in C#, feel free to share them with us by leaving a comment!

16 comments

  1. In example 1, there is a race. So some programmers will see "Work done" printed twice, or at least there is a chance of this happening. The DoWork method is not thread-safe, so the new thread and the main thread can evaluate the condition at "almost" the same time.
    So the method should look like:
    void DoWork()
    {
        lock (syncLock)
        {
            if (!_isWorkDone)
            {
                _isWorkDone = true;
                Console.WriteLine("Work done");
            }
        }
    }

    where syncLock is a private object member of SomeClass.

    1. First of all: thanks for reading my post and leaving your feedback. Regarding your doubt about the temp variable: the temp variable is local to each loop iteration, therefore each thread captures a different memory location. Does this answer your question, or did you mean something different when saying that it does not help at all?

      1. Hey 🙂 I mean I tried to run it and it didn't work. The numbers were still in a weird order each time.

        1. There is no guarantee that the numbers will be displayed in order. When you create a thread and then call the Start method, the system (Windows) queues your thread for execution. But immediately in the next iteration you do the same (create a thread and call Start). Now the system queues that thread too, and it may execute the most recently created thread first, because it happens, for example, to be 'on top of the memory'.
          The temp variable just guarantees that each thread will point to a unique value which remembers the value of 'i'; otherwise a few threads could point to the same value of 'i'. To sum up: it is the system's decision which thread should be executed first.

  2. Thank you for this very interesting article.

    However, point 2 is wrong. finally blocks do get executed in background threads.

    What happens in your example is that when you leave the Main() function, your unique foreground thread gets terminated, which causes the process to end. Any existing background threads are simply torn down. In other words, you don't give your background thread the opportunity to execute completely.

    Simply add

    backgroundThread.Join()

    between the call to Start() and the end of the block, and you will see "This should be always executed" being printed.

    1. Sorry, after a closer look at your example, it seems that you were aware of this (and you added a Sleep() in your background thread code in order to delay the possible execution of the finally block).

      This point needs to be rephrased, however, because it suggests there is something special about the finally block, whereas the interruption can occur anywhere when the process terminates.

  3. I want to switch between threads.
    I have some code in which one function is executed in a separate thread. This function triggers an event at the end, and this event in turn calls a function which gives me an exception: "Cross thread operation not valid.. etc."

    So here I want to stop the worker thread after some point and then allow this code inside the event handler to be executed as a part of main thread.

    Can you help me out with this?

  4. Some great explanations there. I haven’t come across many discussions about some of these issues, like out of order execution.

  5. what about the following case:
    Thread A:
    while (flag)
    {
    }

    Thread B:
    flag=false;

    Is it guaranteed that Thread A will end the loop, or is it possible that, for Thread A, flag remains true due to caching?

    The question is: do I have to lock around the flag variable on read and on write to ensure visibility?
