I have developed a .NET Core 2.2 API in Visual Studio 2019. It consumes two custom NuGet packages, Auth and Log, created for authentication and logging respectively.
The Auth package internally references the Log package. I have used dependency injection in the ConfigureServices method of the Startup class (services.AddScoped&lt;XXX&gt;()).
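The registration looks roughly like this (the interface and class names below are placeholders for illustration, not the actual package types):

```csharp
// Startup.cs — ConfigureServices (names are illustrative only)
public void ConfigureServices(IServiceCollection services)
{
    // Registration for the Log package
    services.AddScoped<ILogService, LogService>();

    // Registration for the Auth package; AuthService itself
    // takes ILogService as a constructor dependency
    services.AddScoped<IAuthService, AuthService>();

    services.AddMvc();
}
```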
When the service is deployed on OpenShift, we see a severe memory leak: memory usage keeps climbing until it reaches the container limit, causing the pods to restart.
The following approaches/solutions have been tried so far:
Increasing the memory limit in OpenShift
Disabling server garbage collection (setting ServerGarbageCollection to false)
Upgrading to .NET Core 3.1
Implementing IDisposable and disposing the objects created in the respective controller calls
Forcing a GC.Collect()
Using AddSingleton instead of AddScoped
Using the Visual Studio Performance Profiler to verify that GC is kicking in; GC is invoked in the local debug environment.
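For reference, the IDisposable attempt looked roughly like the following sketch (type names are placeholders, not the real package types):

```csharp
// Illustrative sketch of the disposal attempt — names are placeholders.
public class AuthService : IAuthService, IDisposable
{
    private readonly ILogService _log;

    public AuthService(ILogService log) => _log = log;

    public void Dispose()
    {
        // Release any resources held by the service here.
        // (Note: scoped services created by the container are normally
        // disposed automatically when the request scope ends.)
    }
}

public class AccountController : ControllerBase
{
    private readonly IAuthService _auth;

    public AccountController(IAuthService auth) => _auth = auth;

    // The controller was also changed to call Dispose() on the
    // injected service explicitly at the end of each action.
}
```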
However, when the class files of Auth and Log are included directly in the API solution and called, instead of consuming the packages, there is no memory leak. This defeats the purpose of building them as packages and using dependency injection.
What is the best approach to consuming custom packages via dependency injection in the calling API, and is there a way to effectively dispose of objects created through DI without causing a memory leak?