
c# - Dependency Injection Dealing with IDisposable - Stack Overflow


I am currently refactoring a legacy Job class to adapt it to the dependency injection pattern. However, I am having trouble managing the lifetime of IDisposable objects. It has kept me stuck for a while, so I would like some help from the community.

This is what the original class looks like:

public class Job
{
    public void Execute()
    {
        // Logic1, Logic2, DataProvider, DataProvider2 all have different dependencies.
        // Ignored here to limit the scope of this question.

        // First unit of work, taking ~5 minutes due to complex math and data size.
        var first = new Logic1();
        var data1 = new DataProvider().Get();
        using (StreamWriter sw = new StreamWriter(new FileStream(new Config1().filePath, FileMode.Create))) // FileMode assumed
        {
            foreach (var data in data1)
            {
                var result = first.Compute(data);
                sw.Write(result);
            }
        }

        // Second unit of work, taking ~20 minutes due to complex math and data size.
        var second = new Logic2();
        var data2 = new DataProvider2().Get();
        using (StreamWriter sw = new StreamWriter(new FileStream(new Config2().filePath, FileMode.Create))) // FileMode assumed
        {
            foreach (var data in data2)
            {
                var result = second.Compute(data);
                sw.Write(result);
            }
        }
    }
}

After my initial refactoring, it becomes:

public class Job
{
    private readonly IWork1 _work1;
    private readonly IWork2 _work2;

    public Job(IWork1 work1, IWork2 work2)
    {
        _work1 = work1;
        _work2 = work2;
    }

    public void Execute()
    {
        _work1.DoWork();
        // work2 must happen after work1, cannot be in parallel
        _work2.DoWork2();
    }
}


internal class Work1 : IWork1
{
    private readonly ILogic1 _logic;
    private readonly IDataProvider _dataProvider;
    private readonly IWriter1 _writer;

    public Work1(ILogic1 logic, IDataProvider dataProvider, IWriter1 writer){
        _logic = logic;
        _dataProvider = dataProvider;
        _writer = writer;
    }

    public void DoWork()
    {
        foreach (var data in _dataProvider.Get())
        {
            var result = _logic.Compute(data);
            _writer.Write(result);
        }
    }
}

internal class Work2 : IWork2
{
    private readonly ILogic2 _logic;
    private readonly IDataProvider2 _dataProvider;
    private readonly IWriter2 _writer;

    public Work2(ILogic2 logic, IDataProvider2 dataProvider, IWriter2 writer){
        _logic = logic;
        _dataProvider = dataProvider;
        _writer = writer;
    }

    public void DoWork2()
    {
        foreach (var data in _dataProvider.Get())
        {
            var result = _logic.Compute(data);
            _writer.Write(result);
        }
    }
}

This looks good except for one problem. The Job is executed in a scope, and IWriter1 and IWriter2 both have concrete implementations, Writer1 and Writer2, that implement IDisposable (they keep the file stream open for the whole unit of work; otherwise the streams would be opened and closed a million times). As a result, Writer1 is not disposed until Work2 has finished, holding its file resource for an unnecessary 20 minutes.
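The writer implementations are omitted above. A minimal sketch, with assumed member names, of what Writer1 roughly looks like:

internal class Writer1 : IWriter1, IDisposable
{
    // Sketch with assumed members: the stream is opened once and kept until
    // Dispose is called, so the writer holds the file for its whole lifetime.
    private readonly StreamWriter _sw;

    public Writer1(Config1 config)
    {
        _sw = new StreamWriter(new FileStream(config.filePath, FileMode.Create)); // FileMode assumed
    }

    public void Write(string result) => _sw.Write(result);

    public void Dispose() => _sw.Dispose();
}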

Inspired by IDbContextFactory, I refactor the code further by introducing an IWriterFactory:

internal class Work1 : IWork1
{
    private readonly ILogic1 _logic;
    private readonly IDataProvider _dataProvider;
    private readonly IWriterFactory _factory;

    public Work1(ILogic1 logic, IDataProvider dataProvider, IWriterFactory factory){
        _logic = logic;
        _dataProvider = dataProvider;
        _factory = factory;
    }

    public void DoWork()
    {
        using (var writer = _factory.GetWriter1())
        {
            foreach (var data in _dataProvider.Get())
            {
                var result = _logic.Compute(data);
                writer.Write(result);
            }
        }
    }
}

internal class Work2 : IWork2
{
    private readonly ILogic2 _logic;
    private readonly IDataProvider2 _dataProvider;
    private readonly IWriterFactory _factory;

    public Work2(ILogic2 logic, IDataProvider2 dataProvider, IWriterFactory factory){
        _logic = logic;
        _dataProvider = dataProvider;
        _factory = factory;
    }

    public void DoWork2()
    {
        using (var writer = _factory.GetWriter2())
        {
            foreach (var data in _dataProvider.Get())
            {
                var result = _logic.Compute(data);
                writer.Write(result);
            }
        }
    }
}
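The factory itself is not shown above. A minimal sketch of what it could look like (names assumed); note that for the using statements in DoWork and DoWork2 to compile, IWriter1 and IWriter2 must extend IDisposable:

public interface IWriterFactory
{
    IWriter1 GetWriter1();
    IWriter2 GetWriter2();
}

internal class WriterFactory : IWriterFactory
{
    private readonly Config1 _config1;
    private readonly Config2 _config2;

    public WriterFactory(Config1 config1, Config2 config2)
    {
        _config1 = config1;
        _config2 = config2;
    }

    // Each call creates a fresh writer, so the caller owns it and is
    // responsible for disposing it.
    public IWriter1 GetWriter1() => new Writer1(_config1);
    public IWriter2 GetWriter2() => new Writer2(_config2);
}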

This addresses the lifetime problem; however, it passes ownership of IWriter1 and IWriter2 from IWriterFactory to Work1 and Work2. Now the consumer needs to manage the lifetime explicitly.

To avoid letting Work control the lifetime of Writer, I change my code again to create a separate scope for each Work, and it becomes:

public class Job
{
    private readonly IServiceProvider _services;

    public Job(IServiceProvider services)
    {
        _services = services;
    }

    public void Execute()
    {
        using (var scope1 = _services.CreateScope())
        {
            IWork1 work = scope1.ServiceProvider.GetRequiredService<IWork1>();
            work.DoWork();
        }

        using (var scope2 = _services.CreateScope())
        {
            IWork2 work = scope2.ServiceProvider.GetRequiredService<IWork2>();
            work.DoWork2();
        }
    }
}
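This version relies on the fact that Microsoft.Extensions.DependencyInjection disposes scoped IDisposable services when their scope is disposed. A small self-contained demonstration of that behavior (the types here are illustrative only, not part of the job library):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IThing { void Use(); }

public sealed class DisposableThing : IThing, IDisposable
{
    public void Use() => Console.WriteLine("used");
    public void Dispose() => Console.WriteLine("disposed by the container");
}

public static class ScopeDemo
{
    public static void Run()
    {
        var services = new ServiceCollection();
        services.AddScoped<IThing, DisposableThing>();

        using var provider = services.BuildServiceProvider();

        using (var scope = provider.CreateScope())
        {
            scope.ServiceProvider.GetRequiredService<IThing>().Use();
        } // "disposed by the container" is printed here, when the scope ends
    }
}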


internal class Work1 : IWork1
{
    private readonly ILogic1 _logic;
    private readonly IDataProvider _dataProvider;
    private readonly IWriter1 _writer;

    public Work1(ILogic1 logic, IDataProvider dataProvider, IWriter1 writer){
        _logic = logic;
        _dataProvider = dataProvider;
        _writer = writer;
    }

    public void DoWork()
    {
        foreach (var data in _dataProvider.Get())
        {
            var result = _logic.Compute(data);
            _writer.Write(result);
        }
    }
}

internal class Work2 : IWork2
{
    private readonly ILogic2 _logic;
    private readonly IDataProvider2 _dataProvider;
    private readonly IWriter2 _writer;

    public Work2(ILogic2 logic, IDataProvider2 dataProvider, IWriter2 writer){
        _logic = logic;
        _dataProvider = dataProvider;
        _writer = writer;
    }

    public void DoWork2()
    {
        foreach (var data in _dataProvider.Get())
        {
            var result = _logic.Compute(data);
            _writer.Write(result);
        }
    }
}

Now Writer can be any kind of writer, and Work does not need to know that it uses an IDisposable and therefore has to dispose of it explicitly. However, this is obviously the Service Locator pattern, so I don't know whether this is an ideal approach.

My question is: if the requirement is to dispose of Writer1 and Writer2 as soon as they are no longer needed, which approach is better, the factory approach or the service locator approach? Is there a better solution?

Update: Job use case

The Job class lives in a separate library that is consumed by different host applications. It is registered via an extension method:

public static class JobDI
{
    public static void AddJob(this IServiceCollection services)
    {
        // register other internal services
        services.AddScoped<Job>();
    }
}
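The services behind the "register other internal services" comment are not listed in the question. An expanded sketch of AddJob with assumed lifetimes (scoped writers are disposed by the container when the scope created in Job.Execute ends):

public static class JobDI
{
    public static void AddJob(this IServiceCollection services)
    {
        services.AddScoped<Job>();

        // Assumed lifetimes, not from the question: scoped services live for
        // one scope created in Job.Execute and are disposed with it.
        services.AddScoped<IWork1, Work1>();
        services.AddScoped<IWork2, Work2>();
        services.AddScoped<IWriter1, Writer1>();
        services.AddScoped<IWriter2, Writer2>();

        services.AddTransient<ILogic1, Logic1>();
        services.AddTransient<ILogic2, Logic2>();
        services.AddTransient<IDataProvider, DataProvider>();
        services.AddTransient<IDataProvider2, DataProvider2>();
    }
}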

and in each app's Program.cs:

public class Program
{
    public static async Task Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        builder.Services.AddJob();

        // other code
    }
}
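How a host actually triggers the job is not shown either. One hypothetical consumer (the JobRunner name and shape are mine, not from the question) could create a scope, resolve the scoped Job and run it:

using Microsoft.Extensions.DependencyInjection;

public class JobRunner
{
    private readonly IServiceScopeFactory _scopeFactory;

    public JobRunner(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void RunOnce()
    {
        // Job is registered as scoped, so it is resolved from its own scope;
        // everything scoped that it pulls in is disposed when the scope ends.
        using var scope = _scopeFactory.CreateScope();
        scope.ServiceProvider.GetRequiredService<Job>().Execute();
    }
}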

asked Mar 19 at 2:40 by BigHeadBangBang; edited Mar 19 at 13:15 by Steven

2 Answers

As a result, Writer1 is not disposed until Work2 has finished, holding its file resource for an unnecessary 20 minutes.

Does this actually cause a significant impact on the system? StreamReaders don't typically keep much data in memory; they just forward it to the underlying stream. There might also be a typo in the original code, because I don't think that a StreamReader can write.

this is obviously the Service Locator pattern, so I don't know whether this is an ideal approach.

As long as Job is part of your Composition Root, you are free to inject and use the IServiceProvider, and there is no risk of falling into the Service Locator anti-pattern.

The IWriterFactory version is just fine. You say it has a problem:

This addresses the lifetime problem; however, it passes ownership of IWriter1 and IWriter2 from IWriterFactory to Work1 and Work2. Now the consumer needs to manage the lifetime explicitly.

This is not a problem at all, though. Work1 and Work2 are not merely the stream consumers; they are the stream creators, since they call the factory, and they are therefore the ideal place to determine the stream lifetimes.

Also, I notice that the original version is much simpler than any of the refactorings. Don't pull the logic out into separate Work classes unless you have some other good reason for doing so.
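One way to read that last remark (a sketch of the idea only, not code from the answer; it reuses the IWriterFactory assumed earlier): the job could keep its original linear shape and still use constructor injection for the logic classes, data providers and writer factory.

public class Job
{
    private readonly ILogic1 _logic1;
    private readonly ILogic2 _logic2;
    private readonly IDataProvider _dataProvider1;
    private readonly IDataProvider2 _dataProvider2;
    private readonly IWriterFactory _writerFactory;

    public Job(ILogic1 logic1, ILogic2 logic2,
               IDataProvider dataProvider1, IDataProvider2 dataProvider2,
               IWriterFactory writerFactory)
    {
        _logic1 = logic1;
        _logic2 = logic2;
        _dataProvider1 = dataProvider1;
        _dataProvider2 = dataProvider2;
        _writerFactory = writerFactory;
    }

    public void Execute()
    {
        // First unit of work: the writer is created here and disposed as soon
        // as this block ends, so it is not held during the second unit.
        using (var writer = _writerFactory.GetWriter1())
        {
            foreach (var data in _dataProvider1.Get())
            {
                writer.Write(_logic1.Compute(data));
            }
        }

        // Second unit of work, which must run after the first.
        using (var writer = _writerFactory.GetWriter2())
        {
            foreach (var data in _dataProvider2.Get())
            {
                writer.Write(_logic2.Compute(data));
            }
        }
    }
}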
