{"id":2016,"date":"2024-02-19T02:38:23","date_gmt":"2024-02-19T02:38:23","guid":{"rendered":"http:\/\/write.muthu.co\/?p=2016"},"modified":"2024-02-19T02:38:23","modified_gmt":"2024-02-19T02:38:23","slug":"ram-and-cpu-usage-optimization-in-c-code","status":"publish","type":"post","link":"http:\/\/write.muthu.co\/ram-and-cpu-usage-optimization-in-c-code\/","title":{"rendered":"Ram and CPU usage optimization in C# code"},"content":{"rendered":"\n
In response to the growing usage of our Windows application and the need to enhance its performance, our development team embarked on a project to refactor our legacy C# codebase. A key focus of this effort was to streamline the application’s memory usage to ensure optimal performance and scalability. Recognizing that inefficient memory management can significantly impact the application’s responsiveness and overall user experience, we undertook thorough research and collaborative brainstorming sessions to pinpoint the most effective strategies for optimizing RAM consumption.<\/p>\n\n\n\n
Through our analysis, we uncovered several areas within the codebase where memory inefficiencies were prevalent. These inefficiencies ranged from redundant data structures to suboptimal algorithms and memory leaks. Addressing these issues became paramount in our quest to enhance the application’s efficiency and responsiveness.<\/p>\n\n\n\n
After careful consideration and experimentation, we distilled our findings into a set of best practices and optimization techniques tailored to our specific use case and technology stack. These techniques are designed not only to reduce RAM consumption but also to improve the application’s performance and stability.<\/p>\n\n\n\n
In the following sections, I will outline the key strategies that emerged from our research. These strategies represent our roadmap for optimizing RAM and CPU usage in our C# application, encompassing both low-level memory optimizations and high-level architectural improvements. By implementing these strategies conscientiously, we aim to achieve a leaner, more efficient application that can meet the evolving needs of our users while maintaining high standards of performance and reliability.<\/p>\n\n\n\n
<h2>HashSet vs. List<\/h2>\n\n\n\n
Theoretically, a <code>HashSet<\/code> is better than a <code>List<\/code> for lookups, so we decided to prove it through some benchmarking. We simulated and ran a series of operations on both a <code>HashSet<\/code> and a <code>
List<\/code>. These operations included adding elements, checking for element existence, and removing elements. What we found was both intriguing and insightful. As the dataset size grew, the performance of the
List<\/code> started to lag behind significantly. Each operation took longer to complete, and the total time grew steeply with the size of the dataset. On the other hand, the <code>
HashSet<\/code> remained steadfast and consistent, delivering comparable performance across all operations, regardless of dataset size.<\/p>\n\n\n\n
The results are shown below: you can clearly see that the <code>List<\/code> performs poorly as the data size increases, while the <code>HashSet<\/code> takes nearly the same time for all operations.<\/p>\n\n\n\n
<figure><figcaption>Benchmark results: List vs. HashSet timings for add, contains, and remove across data sizes<\/figcaption><\/figure>\n\n\n\n
This is the code we used to create the benchmark:<\/p>\n\n\n\n
using Newtonsoft.Json;\nusing System;\nusing System.Collections.Generic;\nusing System.Diagnostics;\nusing System.IO;\nusing System.Linq;\n\nnamespace ConsoleApp1\n{\n internal static class Program\n {\n private static List<object> results = new List<object>();\n\n public static void Main(string[] args)\n {\n \/\/ Number of elements for benchmarking\n int[] dataSize = new int[] { 10, 100, 1000, 5000, 10000, 15000, 30000, 50000, 75000, 100000, 100000 };\n \n foreach (int count in dataSize)\n {\n BenchmarkAdd(count);\n BenchmarkContains(count);\n BenchmarkRemove(count);\n }\n\n File.WriteAllText(\"results.json\", JsonConvert.SerializeObject(results));\n }\n\n private static void BenchmarkAdd(int count)\n {\n Stopwatch sw = Stopwatch.StartNew();\n HashSet<int> hashSet = new HashSet<int>();\n \n for (int i = 0; i < count; i++)\n {\n hashSet.Add(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"hashset\",\n benchmark = \"add\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"HashSet Add: {0} ms\", sw.ElapsedMilliseconds);\n \n sw.Restart();\n List<int> list = new List<int>();\n \n for (int i = 0; i < count; i++)\n {\n list.Add(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"list\",\n benchmark = \"add\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"List Add: {0} ms\", sw.ElapsedMilliseconds);\n }\n\n private static void BenchmarkContains(int count)\n {\n List<int> list = Enumerable.Range(0, count).ToList();\n HashSet<int> hashSet = new HashSet<int>(list);\n \n \/\/ search for value in hashset\n Stopwatch sw = Stopwatch.StartNew();\n \n for (int i = 0; i < count; i++)\n {\n hashSet.Contains(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"hashset\",\n benchmark = \"contains\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"HashSet Contains: {0} ms\", sw.ElapsedMilliseconds);\n \n sw.Restart();\n \n \/\/ search for value in list\n for (int i = 0; i < count; i++)\n {\n list.Contains(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"list\",\n benchmark = \"contains\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"List Contains: {0} ms\", sw.ElapsedMilliseconds);\n }\n\n private static void BenchmarkRemove(int count)\n {\n List<int> list = Enumerable.Range(0, count).ToList();\n HashSet<int> hashSet = new HashSet<int>(list);\n Stopwatch sw = Stopwatch.StartNew();\n \n for (int i = 0; i < count; i++)\n {\n hashSet.Remove(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"hashset\",\n benchmark = \"remove\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"HashSet Remove: {0} ms\", sw.ElapsedMilliseconds);\n \n sw.Restart();\n \n for (int i = 0; i < count; i++)\n {\n list.Remove(i);\n }\n \n sw.Stop();\n results.Add(new\n {\n type = \"list\",\n benchmark = \"remove\",\n samples = count,\n time = sw.ElapsedMilliseconds\n });\n \n Console.WriteLine(\"List Remove: {0} ms\", sw.ElapsedMilliseconds);\n }\n }\n}<\/code><\/pre>\n\n\n\n
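If you want more rigorous numbers than a hand-rolled <code>Stopwatch<\/code> loop, a dedicated benchmarking library handles warm-up, iteration counts, and allocation tracking for you. The following is only a sketch of how the same comparison could be expressed with BenchmarkDotNet; it assumes the BenchmarkDotNet NuGet package is installed and was not part of our original measurement code.<\/p>\n\n\n\n
<pre><code>using System.Collections.Generic;\nusing System.Linq;\nusing BenchmarkDotNet.Attributes;\nusing BenchmarkDotNet.Running;\n\n[MemoryDiagnoser]\npublic class ContainsBenchmark\n{\n    \/\/ Each benchmark method runs once per data size listed here.\n    [Params(1000, 100000)]\n    public int Size;\n\n    private List<int> _list;\n    private HashSet<int> _hashSet;\n\n    [GlobalSetup]\n    public void Setup()\n    {\n        _list = Enumerable.Range(0, Size).ToList();\n        _hashSet = new HashSet<int>(_list);\n    }\n\n    [Benchmark(Baseline = true)]\n    public bool ListContains() => _list.Contains(Size - 1);\n\n    [Benchmark]\n    public bool HashSetContains() => _hashSet.Contains(Size - 1);\n}\n\npublic class BenchmarkProgram\n{\n    public static void Main() => BenchmarkRunner.Run<ContainsBenchmark>();\n}<\/code><\/pre>\n\n\n\n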
In summary, here is a simple difference between a suboptimal and an optimized approach.<\/p>\n\n\n\n
Suboptimal Approach:<\/h3>\n\n\n\n
List<int> numbers = new List<int>(); \/\/ Using a List to store numbers\n\n\/\/ Adding numbers to the list\nnumbers.Add(10);\nnumbers.Add(20);\nnumbers.Add(30);\n\n\/\/ Performing search operation\nint index = numbers.IndexOf(20); \/\/ Search for an element in the list\n<\/code><\/pre>\n\n\n\n
While a <code>List<\/code> provides dynamic resizing and various useful methods, it might not be the most efficient choice for frequent search operations. The <code>IndexOf<\/code> method performs a linear search, resulting in O(n) time complexity.<\/p>\n\n\n\n
Optimized Approach:<\/h3>\n\n\n\n
using System;\nusing System.Collections.Generic;\n\nclass Program\n{\n static void Main()\n {\n \/\/ Optimized code using HashSet<int> for faster search\n\n \/\/ Using a HashSet<int> instead of List<int>\n HashSet<int> numbers = new HashSet<int>();\n\n \/\/ Adding numbers to the HashSet\n numbers.Add(10);\n numbers.Add(20);\n numbers.Add(30);\n\n \/\/ Performing search operation\n bool contains20 = numbers.Contains(20);\n\n }\n}<\/code><\/pre>\n\n\n\n
Using a <code>HashSet<\/code> for this scenario improves the search operation efficiency to O(1) on average, offering faster lookups for large collections due to its hashing-based data structure.<\/p>\n\n\n\n
Lookup Operations in Dictionary<\/h2>\n\n\n\n
Suppose you frequently need to search for values associated with keys. It’s common practice to check whether the key exists before retrieving its value from the dictionary. While this approach is harmless, we’ve discovered a more optimal alternative: utilizing the <code>TryGetValue<\/code> method of the dictionary.<\/p>\n\n\n\n
The <code>
TryGetValue<\/code> method in C# is preferred over directly accessing a key in a dictionary because of its safety and performance benefits.<\/p>\n\n\n\n
<strong>Safety:<\/strong> When you access a key directly using the indexer (<code>myDictionary[key]<\/code>) and the key does not exist in the dictionary, a <code>KeyNotFoundException<\/code> is thrown. On the other hand, the <code>
TryGetValue<\/code> method returns a boolean indicating whether the key exists in the dictionary or not, and if it does, it also retrieves the corresponding value. This prevents your code from throwing exceptions, making it safer and more robust.<\/li>\n\n\n\n
<strong>Performance:<\/strong> <code>TryGetValue<\/code> also avoids redundant work. The common pattern of calling <code>ContainsKey<\/code> and then reading the value through the indexer performs two hash lookups for every read, and relying on the indexer alone means a missing key costs a thrown exception. <code>TryGetValue<\/code> does a single lookup and reports a missing key through its boolean return value, which adds up in hot paths and with large dictionaries.<\/li>\n<\/ol>\n\n\n\n
Here’s an example illustrating the usage of <code>TryGetValue<\/code>:<\/p>\n\n\n\n
Dictionary<string, int> myDictionary = new Dictionary<string, int>();\n\n\/\/ Directly accessing a key (not recommended)\ntry\n{\n int value = myDictionary[\"key\"];\n Console.WriteLine(\"Value: \" + value);\n}\ncatch (KeyNotFoundException)\n{\n Console.WriteLine(\"Key not found.\");\n}\n\n\/\/ Using TryGetValue (recommended)\nint result;\nif (myDictionary.TryGetValue(\"key\", out result))\n{\n Console.WriteLine(\"Value: \" + result);\n}\nelse\n{\n Console.WriteLine(\"Key not found.\");\n}<\/code><\/pre>\n\n\n\n
In this example, the second approach using <code>TryGetValue<\/code> is safer and more efficient, as it handles the case where the key doesn’t exist without throwing an exception and provides better performance, especially for large dictionaries.<\/p>\n\n\n\n
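To make the single-lookup point concrete, here is a small sketch we added for illustration (the dictionary contents and loop are hypothetical, not taken from our codebase). It contrasts the <code>ContainsKey<\/code>-plus-indexer pattern, which hashes each key twice, with a single <code>TryGetValue<\/code> call:<\/p>\n\n\n\n
<pre><code>using System;\nusing System.Collections.Generic;\n\nclass TryGetValueSketch\n{\n    static void Main()\n    {\n        \/\/ Hypothetical word-count dictionary used only for illustration.\n        Dictionary<string, int> wordCounts = new Dictionary<string, int>\n        {\n            { \"alpha\", 3 },\n            { \"beta\", 5 }\n        };\n\n        string[] lookups = { \"alpha\", \"beta\", \"gamma\" };\n\n        \/\/ Two lookups per key: ContainsKey hashes the key, the indexer hashes it again.\n        foreach (string word in lookups)\n        {\n            if (wordCounts.ContainsKey(word))\n            {\n                Console.WriteLine(word + \": \" + wordCounts[word]);\n            }\n        }\n\n        \/\/ One lookup per key: TryGetValue hashes the key once and reports misses\n        \/\/ through its return value instead of an exception.\n        foreach (string word in lookups)\n        {\n            int count;\n            if (wordCounts.TryGetValue(word, out count))\n            {\n                Console.WriteLine(word + \": \" + count);\n            }\n        }\n    }\n}<\/code><\/pre>\n\n\n\n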
Avoid general-purpose collections when there is a use-case-specific collection<\/h2>\n\n\n\n
Suppose you require a data structure for first-in-first-out (FIFO) operations. This can be achieved using a <code>List<\/code>, as shown in the code below:<\/p>\n\n\n\n
List<int> queue = new List<int>(); \/\/ Using a List as a queue\n\n\/\/ Enqueue elements\nqueue.Add(5);\nqueue.Add(10);\n\n\/\/ Dequeue operation\nint firstElement = queue[0];\nqueue.RemoveAt(0);\n<\/code><\/pre>\n\n\n\n
Using a <code>List<\/code> for a queue might result in inefficient dequeue operations (<code>
RemoveAt<\/code>) due to shifting elements. <\/p>\n\n\n\n
Optimized Approach:<\/h3>\n\n\n\n
Queue<int> queue = new Queue<int>(); \/\/ Using Queue for FIFO operations\n\n\/\/ Enqueue elements\nqueue.Enqueue(5);\nqueue.Enqueue(10);\n\n\/\/ Dequeue operation\nint firstElement = queue.Dequeue(); \/\/ Remove and retrieve the first element\n<\/code><\/pre>\n\n\n\n
Here’s an elaboration on why using a <code>List<\/code> as a queue is suboptimal and how <code>
Queue<\/code> is the optimized approach:<\/p>\n\n\n\n
<strong>List vs. Queue:<\/strong><\/p>\n\n\n\n
A <code>
List<\/code> is designed for general-purpose collections, allowing insertion and removal from anywhere. A
Queue<\/code> specifically focuses on FIFO (First-In-First-Out) operations, where elements are enqueued at the end and dequeued from the beginning.<\/li>\n\n\n\n
Removing the first element of a <code>List<\/code> requires shifting all remaining elements down one position, leading to <code>
O(n)<\/code> time complexity, where
n<\/code> is the number of elements. In contrast, a
Queue<\/code> tracks the head and tail of an internal circular buffer, so it can reach the first and last elements directly, achieving amortized <code>
O(1)<\/code> time complexity for both enqueue and dequeue operations.<\/li>\n\n\n\n
A <code>List<\/code> only stores the data values, while a <code>
Queue<\/code> might have additional fields for managing pointers and internal state, potentially leading to slightly higher memory usage.<\/li>\n<\/ul>\n\n\n\n
<strong>Consequences of using a List as a queue:<\/strong><\/p>\n\n\n\n
Every dequeue via <code>RemoveAt(0)<\/code> shifts the remaining elements, an operation of <code>
O(n)<\/code> complexity.<\/li>\n\n\n\n
If other code inserts or removes elements anywhere in the <code>List<\/code> directly, it can disrupt the queue’s expected behavior and lead to incorrect results.<\/li>\n\n\n\n
Maintaining code that uses a <code>List<\/code> masquerading as a queue can be difficult due to the lack of explicit FIFO semantics.<\/li>\n<\/ul>\n\n\n\n
<strong>Benefits of using <code>Queue<\/code>:<\/strong><\/p>\n\n\n\n
Enqueue and dequeue run in constant time, so performance stays predictable as the collection grows.<\/li>\n\n\n\n
The <code>
Queue<\/code> class enforces FIFO behavior, preventing accidental modifications that could violate the queue’s order.<\/li>\n<\/ul>\n\n\n\n
<strong>Additional considerations:<\/strong><\/p>\n\n\n\n
For very small collections, the performance difference between a <code>
List<\/code> and
Queue<\/code> might be negligible.<\/li>\n\n\n\n
If the queue is shared across threads, consider the thread-safe <code>ConcurrentQueue<\/code> implementation (a short sketch follows this section).<\/li>\n<\/ul>\n\n\n\n
<strong>In conclusion,<\/strong> using a <code>Queue<\/code> is the better choice for implementing FIFO operations due to its efficiency, clarity, and safety advantages. Avoid using a <code>
List<\/code> as a queue unless you have specific reasons for prioritizing simplicity over optimal performance and behavior.<\/p>\n\n\n\n
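As mentioned above, here is a minimal sketch of the thread-safe alternative. The producer\/consumer scenario is invented purely for illustration; <code>ConcurrentQueue<\/code> and its <code>Enqueue<\/code>\/<code>TryDequeue<\/code> methods come from <code>System.Collections.Concurrent<\/code>.<\/p>\n\n\n\n
<pre><code>using System;\nusing System.Collections.Concurrent;\nusing System.Threading.Tasks;\n\nclass ConcurrentQueueSketch\n{\n    static void Main()\n    {\n        \/\/ Thread-safe FIFO queue; callers do not need their own locking.\n        ConcurrentQueue<int> queue = new ConcurrentQueue<int>();\n\n        \/\/ Producer: enqueue work items from a background task.\n        Task producer = Task.Run(() =>\n        {\n            for (int i = 0; i < 1000; i++)\n            {\n                queue.Enqueue(i);\n            }\n        });\n\n        producer.Wait();\n\n        \/\/ Consumer: TryDequeue returns false when the queue is empty\n        \/\/ instead of throwing, which suits concurrent readers.\n        int item;\n        int processed = 0;\n        while (queue.TryDequeue(out item))\n        {\n            processed++;\n        }\n\n        Console.WriteLine(\"Processed \" + processed + \" items\");\n    }\n}<\/code><\/pre>\n\n\n\n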
Arrays vs. Lists<\/h2>\n\n\n\n
Use arrays when you know the size of the data collection in advance and need fast random access to elements. They have a fixed size and occupy a contiguous block of memory. Take a look at the example below:<\/p>\n\n\n\n
List<int> numbers = new List<int>(); \/\/ Initially empty\nfor (int i = 0; i < 10000; i++)\n{\n numbers.Add(i); \/\/ Dynamically adds elements, might trigger resizing\n}\n\n\/\/ Access elements, but might involve internal overhead\nint valueAt500 = numbers[500];<\/code><\/pre>\n\n\n\n
Optimized Approach:<\/h3>\n\n\n\n
int[] numbers = new int[10000]; \/\/ Allocate a fixed block of 10000 integers\nfor (int i = 0; i < numbers.Length; i++)\n{\n numbers[i] = i;\n}\n\n\/\/ Access elements directly, no additional memory overhead\nint valueAt500 = numbers[500];\n<\/code><\/pre>\n\n\n\n
<strong>Array vs List<\/strong><\/p>\n\n\n\n
An array allocates one contiguous block of exactly the requested size and never resizes, so indexing is a direct memory access with no extra overhead. A <code>List<\/code> wraps an internal array that grows by allocating a larger array and copying the elements across, which can leave unused capacity and adds resizing work while the collection is being filled.<\/p>\n\n\n\n
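If you still want the <code>List<\/code> API but know the element count up front, pre-sizing the list captures most of that benefit. A minimal sketch, reusing the same 10,000-element scenario as the examples above:<\/p>\n\n\n\n
<pre><code>using System;\nusing System.Collections.Generic;\n\nclass PresizedListSketch\n{\n    static void Main()\n    {\n        \/\/ Passing the expected count to the constructor allocates the\n        \/\/ internal array once, so Add never has to resize and copy.\n        List<int> numbers = new List<int>(10000);\n        for (int i = 0; i < 10000; i++)\n        {\n            numbers.Add(i);\n        }\n\n        \/\/ Same random access as before.\n        int valueAt500 = numbers[500];\n        Console.WriteLine(valueAt500);\n    }\n}<\/code><\/pre>\n\n\n\n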
StringBuilder vs. String Concatenation<\/h2>\n\n\n\n
Use <code>StringBuilder<\/code> for repeated string concatenations, especially in loops. It’s more efficient than using the <code>+<\/code> operator, which creates a new string object each time.<\/p>\n\n\n\n
Here’s a code example demonstrating the importance of <code>StringBuilder<\/code> vs. string concatenation for memory optimization:<\/p>\n\n\n\n
using System;\nusing System.Text;\n\nnamespace StringConcatenationDemo\n{\n    class Program\n    {\n        static void Main(string[] args)\n        {\n            \/\/ Inefficient string concatenation using the + operator\n            string combinedString = \"\";\n            for (int i = 0; i < 100000; i++)\n            {\n                combinedString += i.ToString(); \/\/ Creates a new string object for each iteration\n            }\n\n            \/\/ Efficient string building using StringBuilder\n            StringBuilder builder = new StringBuilder();\n            for (int i = 0; i < 100000; i++)\n            {\n                builder.Append(i.ToString()); \/\/ Appends to the existing internal buffer\n            }\n\n            \/\/ Creates the final string only once\n            string finalString = builder.ToString();\n        }\n    }\n}<\/code><\/pre>\n\n\n\n
<strong>Explanation:<\/strong><\/p>\n\n\n\n
String concatenation with the <code>+<\/code> operator: each iteration creates a new string object combining the previous <code>
combinedString<\/code> and the current
i.ToString()<\/code>.<\/li>\n\n\n\n
Efficient string building with <code>StringBuilder<\/code>: a <code>
StringBuilder<\/code> object maintains an internal buffer to efficiently handle string manipulations.<\/li>\n\n\n\n
The <code>Append()<\/code> method appends text to the existing buffer, avoiding the creation of new string objects for each iteration.<\/li>\n\n\n\n
The final string is created only once, when <code>ToString()<\/code> is called, reducing memory usage significantly.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n
<strong>Key Points:<\/strong><\/p>\n\n\n\n
Strings in C# are immutable; concatenating two strings with <code>
+<\/code> actually creates a new string object with the combined content.<\/li>\n\n\n\n
StringBuilder<\/code> is specifically designed for efficient string manipulation, avoiding unnecessary object creation and memory allocation.<\/li>\n<\/ul>\n\n\n\n
Structs vs. Classes<\/h2>\n\n\n\n
Consider using structs for small, lightweight data structures that primarily hold data without complex behavior. They are value types, stored inline (on the stack for locals, or directly inside an array or enclosing object) rather than as separate heap objects, and can reduce memory overhead compared to classes in certain cases.<\/p>\n\n\n\n
Here’s a code example demonstrating the importance of structs vs. classes for memory optimization:<\/p>\n\n\n\n
using System;\n\nnamespace StructVsClassDemo\n{\n \/\/ Class representation\n class PointAsClass\n {\n public int X { get; set; }\n public int Y { get; set; }\n }\n\n \/\/ Struct representation\n struct PointAsStruct\n {\n public int X;\n public int Y;\n }\n\n class Program\n {\n static void Main(string[] args)\n {\n \/\/ Create 100000 points using classes\n PointAsClass[] pointsAsClasses = new PointAsClass[100000];\n for (int i = 0; i < 100000; i++)\n {\n pointsAsClasses[i] = new PointAsClass() { X = i, Y = i * 2 };\n }\n\n \/\/ Create 100000 points using structs\n PointAsStruct[] pointsAsStructs = new PointAsStruct[100000];\n for (int i = 0; i < 100000; i++)\n {\n pointsAsStructs[i] = new PointAsStruct() { X = i, Y = i * 2 };\n }\n }\n }\n}\n<\/code><\/pre>\n\n\n\n
<strong>Explanation:<\/strong><\/p>\n\n\n\n
<ul>\n<li><code>PointAsClass<\/code> is a reference type: the array holds 100,000 references, and every point is a separate object allocated on the heap, each carrying an object header and tracked by the garbage collector.<\/li>\n<li><code>PointAsStruct<\/code> is a value type: the array stores the <code>X<\/code> and <code>Y<\/code> values inline, so the whole collection is a single contiguous allocation with no per-element objects.<\/li>\n<\/ul>\n\n\n\n
<strong>Key Points:<\/strong><\/p>\n\n\n\n
<ul>\n<li>For large numbers of small, data-only items, structs avoid per-instance allocation overhead and reduce garbage-collection pressure.<\/li>\n<li>Keep structs small and behavior-free; they are copied by value, so large structs that are passed around frequently can cost more than they save.<\/li>\n<\/ul>\n\n\n\n
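A rough way to see the difference yourself is to compare <code>GC.GetTotalMemory<\/code> before and after building each array. This is only a sketch we added for illustration; the numbers it prints are approximate and depend on the runtime and garbage collector, and the point types are redeclared (with different names) so the snippet compiles on its own.<\/p>\n\n\n\n
<pre><code>using System;\n\nnamespace StructVsClassMemorySketch\n{\n    \/\/ Same shapes as PointAsClass \/ PointAsStruct above, renamed so this\n    \/\/ sketch can sit alongside the previous example without clashes.\n    class PointClass { public int X; public int Y; }\n    struct PointStruct { public int X; public int Y; }\n\n    class MemoryComparison\n    {\n        static void Main()\n        {\n            const int count = 100000;\n\n            long before = GC.GetTotalMemory(true);\n\n            \/\/ 100,000 separate heap objects plus an array of references.\n            PointClass[] asClasses = new PointClass[count];\n            for (int i = 0; i < count; i++)\n            {\n                asClasses[i] = new PointClass { X = i, Y = i * 2 };\n            }\n            long afterClasses = GC.GetTotalMemory(true);\n\n            \/\/ One array whose elements hold the values inline.\n            PointStruct[] asStructs = new PointStruct[count];\n            for (int i = 0; i < count; i++)\n            {\n                asStructs[i] = new PointStruct { X = i, Y = i * 2 };\n            }\n            long afterStructs = GC.GetTotalMemory(true);\n\n            \/\/ Rough, GC-dependent numbers: expect the class version to cost noticeably more.\n            Console.WriteLine(\"Classes: ~\" + (afterClasses - before) + \" bytes\");\n            Console.WriteLine(\"Structs: ~\" + (afterStructs - afterClasses) + \" bytes\");\n\n            GC.KeepAlive(asClasses);\n            GC.KeepAlive(asStructs);\n        }\n    }\n}<\/code><\/pre>\n\n\n\n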
Avoid Global Variables<\/h2>\n\n\n\n
Avoiding global variables in C# involves encapsulating data within classes or passing data explicitly between methods. We moved many static global variables into instance or local scope; this change did not bring huge improvements on its own, but it added up as part of the overall gains. Here’s an example that demonstrates how to avoid global variables by using a class to encapsulate data:<\/p>\n\n\n\n
using System;\n\npublic class DataProcessor\n{\n private int globalValue; \/\/ Global variable replaced by an instance variable\n\n public DataProcessor(int initialValue)\n {\n this.globalValue = initialValue;\n }\n\n public void ProcessData()\n {\n \/\/ Perform data processing using the encapsulated value\n Console.WriteLine($\"Processing data with value: {globalValue}\");\n }\n\n \/\/ Other methods can use the encapsulated data as needed\n\n public void SetGlobalValue(int newValue)\n {\n \/\/ Setter method to modify the encapsulated value\n this.globalValue = newValue;\n }\n}\n\nclass Program\n{\n static void Main()\n {\n \/\/ Create an instance of DataProcessor\n DataProcessor dataProcessor = new DataProcessor(initialValue: 42);\n\n \/\/ Call methods on the instance, avoiding global variables\n dataProcessor.ProcessData();\n\n \/\/ Modify the encapsulated value using a setter method\n dataProcessor.SetGlobalValue(newValue: 99);\n\n \/\/ Call the processing method again with the updated value\n dataProcessor.ProcessData();\n }\n}<\/code><\/pre>\n\n\n\n
In this example:<\/p>\n\n\n\n
The <code>
DataProcessor<\/code> class encapsulates the global value within a private instance variable.<\/li>\n\n\n\n
The <code>ProcessData<\/code> method operates on the encapsulated data without relying on a global variable.<\/li>\n\n\n\n
The <code>SetGlobalValue<\/code> method provides a way to modify the encapsulated value.<\/li>\n<\/ul>\n\n\n\n
By using a class and encapsulating data within it, you avoid the use of global variables and promote better organization and encapsulation of your code.<\/p>\n\n\n\n
Optimize Large Data Processing<\/h2>\n\n\n\n
Optimizing large data processing in C# often involves processing data in smaller chunks to avoid loading the entire dataset into memory at once. Here’s an example that demonstrates how to process large data in chunks:<\/p>\n\n\n\n
using System;\nusing System.Collections.Generic;\nusing System.IO;\n\npublic class LargeDataProcessor\n{\n \/\/ Process data in chunks to optimize memory usage\n public void ProcessLargeData(string filePath, int chunkSize)\n {\n using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))\n using (StreamReader reader = new StreamReader(fileStream))\n {\n char[] buffer = new char[chunkSize];\n int bytesRead;\n\n do\n {\n bytesRead = reader.Read(buffer, 0, buffer.Length);\n\n \/\/ Process the current chunk of data\n ProcessChunk(buffer, bytesRead);\n } while (bytesRead == buffer.Length);\n }\n }\n\n \/\/ Method to process a chunk of data\n private void ProcessChunk(char[] dataChunk, int length)\n {\n \/\/ Perform processing on the current chunk of data\n Console.WriteLine($\"Processing chunk of size {length}: {new string(dataChunk, 0, length)}\");\n }\n}\n\nclass Program\n{\n static void Main()\n {\n \/\/ Example: Process a large file in chunks\n LargeDataProcessor dataProcessor = new LargeDataProcessor();\n\n \/\/ Specify the file path and the desired chunk size\n string filePath = \"large_data.txt\";\n int chunkSize = 1024; \/\/ Adjust the chunk size based on your requirements\n\n \/\/ Process the large data file in chunks\n dataProcessor.ProcessLargeData(filePath, chunkSize);\n }\n}<\/code><\/pre>\n\n\n\n
In this example:<\/p>\n\n\n\n
The <code>
LargeDataProcessor<\/code> class contains a method
ProcessLargeData<\/code> that reads the data from a file in chunks.<\/li>\n\n\n\n
The <code>ProcessChunk<\/code> method is called for each chunk of data read from the file, allowing you to process the data without loading the entire file into memory at once.<\/li>\n\n\n\n
The file is read through a <code>FileStream<\/code> and <code>
StreamReader<\/code>, and the data is processed in chunks specified by the
chunkSize<\/code> parameter.<\/li>\n<\/ul>\n\n\n\n
Adjust the <code>chunkSize<\/code> based on your specific requirements and available memory. This approach helps optimize memory usage when dealing with large datasets, as it avoids loading the entire dataset into memory, which can lead to increased memory consumption and potential performance issues.<\/p>\n\n\n\n
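For line-oriented text files there is an even simpler way to keep the memory footprint small: <code>File.ReadLines<\/code> streams the file lazily, one line at a time. The sketch below is only an illustration; the file name and the word-counting logic are placeholders rather than code from our application.<\/p>\n\n\n\n
<pre><code>using System;\nusing System.IO;\n\nclass StreamingReadSketch\n{\n    static void Main()\n    {\n        \/\/ File.ReadLines returns a lazy IEnumerable<string>; lines are read from\n        \/\/ disk as the loop consumes them, so the whole file is never held in\n        \/\/ memory at once (unlike File.ReadAllLines).\n        long totalWords = 0;\n        foreach (string line in File.ReadLines(\"large_data.txt\"))\n        {\n            totalWords += line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;\n        }\n\n        Console.WriteLine(\"Total words: \" + totalWords);\n    }\n}<\/code><\/pre>\n\n\n\n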
Limit the Use of Finalizers<\/strong><\/h2>\n\n\n\n
In C#, finalizers are special methods that are part of the .NET garbage collection system. They are invoked by the garbage collector before an object is reclaimed, providing an opportunity to release unmanaged resources or perform cleanup operations. While finalizers can be useful in certain scenarios, it’s important to be cautious about their usage for several reasons:<\/p>\n\n\n\n
<strong>Non-deterministic Execution:<\/strong> You cannot control when, or even whether, a finalizer runs; it executes only when the garbage collector decides to collect the object.<\/li>\n\n\n\n
<strong>Performance Overhead:<\/strong> Objects with finalizers survive at least one extra garbage-collection cycle and must be tracked on the finalization queue, which adds overhead.<\/li>\n\n\n\n
<strong>Resource Leaks:<\/strong> Because cleanup is deferred until the finalizer eventually runs, scarce resources such as file handles or connections can stay held far longer than necessary.<\/li>\n\n\n\n
<strong>Unreliable for Memory Management:<\/strong> Finalizers are not a substitute for deterministic cleanup, which is better handled by implementing the <code>
IDisposable<\/code> interface and calling
Dispose()<\/code> explicitly.<\/li>\n<\/ul>\n\n\n\n
Given these considerations, here are some best practices for managing finalizers:<\/p>\n\n\n\n
<strong>Implement IDisposable for Resource Cleanup<\/strong> – Instead of relying solely on finalizers, implement the <code>IDisposable<\/code> interface for explicit resource cleanup. This allows you to use the <code>
using<\/code> statement to ensure timely disposal of resources.<\/p>\n\n\n\n
public class MyClass : IDisposable\n {\n private bool disposed = false;\n\n public void Dispose()\n {\n Dispose(true);\n GC.SuppressFinalize(this);\n }\n\n protected virtual void Dispose(bool disposing)\n {\n if (!disposed)\n {\n if (disposing)\n {\n \/\/ Dispose of managed resources\n }\n\n \/\/ Dispose of unmanaged resources\n disposed = true;\n }\n }\n\n ~MyClass()\n {\n Dispose(false);\n }\n }\n\n<\/code><\/pre>\n\n\n\n
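For completeness, this is how calling code would typically consume such a class. The <code>using<\/code> statement guarantees that <code>Dispose()<\/code> runs even if an exception is thrown inside the block (a minimal sketch built around the <code>MyClass<\/code> shown above):<\/p>\n\n\n\n
<pre><code>using (MyClass resource = new MyClass())\n{\n    \/\/ Work with the resource here.\n    \/\/ Dispose() runs automatically when the block exits,\n    \/\/ even if an exception is thrown.\n}<\/code><\/pre>\n\n\n\n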
<strong>Use Finalizers Sparingly<\/strong> – Limit the use of finalizers to scenarios where explicit resource cleanup is necessary and alternatives are not feasible. Finalizers should be a last resort rather than a primary mechanism for resource management.<\/p>\n\n\n\n
<strong>Consider SafeHandle or CriticalFinalizerObject<\/strong> – If your class needs to deal with unmanaged resources, consider using the <code>SafeHandle<\/code> class or deriving from <code>
CriticalFinalizerObject<\/code> to ensure more reliable resource cleanup.<\/p>\n\n\n\n
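As an illustration of the <code>SafeHandle<\/code> approach, here is a minimal sketch that wraps a block of unmanaged memory. The class name and the <code>AllocHGlobal<\/code> example are our own illustrative choices; the point is that the runtime guarantees <code>ReleaseHandle<\/code> runs even if the caller forgets to dispose the handle.<\/p>\n\n\n\n
<pre><code>using System;\nusing System.Runtime.InteropServices;\nusing Microsoft.Win32.SafeHandles;\n\n\/\/ Illustrative wrapper around unmanaged memory. SafeHandle provides a\n\/\/ critical finalizer for free, so the memory is released even if the\n\/\/ caller never disposes the handle.\ninternal sealed class UnmanagedBufferHandle : SafeHandleZeroOrMinusOneIsInvalid\n{\n    public UnmanagedBufferHandle(int sizeInBytes) : base(true)\n    {\n        SetHandle(Marshal.AllocHGlobal(sizeInBytes));\n    }\n\n    protected override bool ReleaseHandle()\n    {\n        Marshal.FreeHGlobal(handle);\n        return true;\n    }\n}\n\nclass SafeHandleSketch\n{\n    static void Main()\n    {\n        \/\/ Deterministic cleanup via using; the SafeHandle finalizer is the backstop.\n        using (UnmanagedBufferHandle buffer = new UnmanagedBufferHandle(1024))\n        {\n            Console.WriteLine(\"Allocated \" + buffer.DangerousGetHandle());\n        }\n    }\n}<\/code><\/pre>\n\n\n\n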
Dispose of unused objects<\/strong><\/h2>\n\n\n\n
Disposing of unused objects in C# is crucial for efficient memory management and preventing resource leaks. Imagine you have a program that reads data from a file. To access the file, you create a <code>FileStream<\/code> object. This object holds a reference to the actual file and consumes system resources.<\/p>\n\n\n\n
<strong>Without Disposing:<\/strong><\/p>\n\n\n\n
You open the file and work with the <code>
FileStream<\/code>.<\/li>\n\n\n\n