Monday, October 22, 2012

Dependency Injection: Inversion of Control

Inversion of Control

The first thing to consider when planning an advanced architecture is support for extensions. Many platforms support tools and other extras as independently compiled components. In general this is only possible when the references are soft-coded.

Architecture Overview

To build an inversion of control framework from the ground up, we'll focus on three modules: the main application, a library with helper utilities, and a component that we'll use in the application without referencing it directly at compile time.
We will use the standard app.config file to soft-code the information about the component, split into two parts: the assembly name, which we need to build the path to the DLL file, and the class name, used to instantiate the component at run-time through reflection. This file is part of the first module - the application.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="IComponentModule" value="SampleComponent.dll" />
    <add key="IComponentClass" value="SampleComponent.SampleComponent" />
  </appSettings>
</configuration>

And here is a very simplified interface for the component. It is part of the common library module:

public interface IComponent
{
    string GetResult();
}

Basic Dependency Injection Framework

First we'll implement the framework that handles the dependency injection functionality, and then we'll build the whole sample application. We need just a few methods in a Tools class (it could be a singleton, but since all its members are static, a static class works just as well). This class also lives in the common library module:

The first method is a generic helper that reads values from the standard app.config file:

private static T GetCfgValue<T>(string key)
{
    try
    {
        // Read the raw string value and convert it to the requested type.
        // ConfigurationManager requires a reference to System.Configuration.
        var value = ConfigurationManager.AppSettings[key];
        return (T)Convert.ChangeType(value, typeof(T));
    }
    catch
    {
        return default(T);
    }
}

The next part is loading the assembly:

private static Assembly GetModuleByType(Type interfaceType)
{
    string toolName = interfaceType.Name;
    string assemblyPath = Assembly.GetExecutingAssembly().Location;

    // The component DLL is expected next to the executing assembly.
    string dir = Path.GetDirectoryName(assemblyPath);
    string file = GetCfgValue<string>(toolName + "Module");
    string modulePath = Path.Combine(dir, file);
    return Assembly.LoadFile(modulePath);
}

And getting the component type:

private static Type GetType<T>()
{
    string toolName = typeof(T).Name;
    string cfgTypeKey = toolName + "Class";
    var module = GetModuleByType(typeof(T));
    return module.GetType(GetCfgValue<string>(cfgTypeKey));
}

And the last piece is the only public method in our Tools class, which provides the API for instantiating components dynamically by a given interface:

public static T GetInstance<T>()
{
    var type = GetType<T>();
    return (T)Activator.CreateInstance(type);
}

The Independently Compiled Component

So far we have the app.config file from the application and the common library module that contains the component interface and the basic inversion of control framework. Now we'll implement the component with the API specified in the interface, which for brevity has a single method.

public class SampleComponent : IComponent
{
    public string GetResult()
    {
        return "This is a sample result";
    }
}

Putting it all together

At this point we have everything we need to instantiate the component from the main application, which has no compile-time reference to it at all. Creating an instance of that class is as easy as:

class Program
{
    static void Main(string[] args)
    {
        var instance = Tools.GetInstance<IComponent>();
        Console.WriteLine(instance.GetResult());
    }
}

Once we have a framework for instantiating classes from different modules by a given interface, we can easily add a setup utility to our application that switches between different implementations of the application's plugins.
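For example, switching to a different plugin implementation only requires changing the two configuration values; the main application never needs to be recompiled. The assembly and class names below are hypothetical:

<add key="IComponentModule" value="AnotherComponent.dll" />
<add key="IComponentClass" value="AnotherComponent.AdvancedComponent" />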

The next steps

From this point we can look at an even higher level of abstraction - loading modules from the cloud, which allows updates without patching every installation of the software, among other benefits. We will cover this topic later as well.
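As a minimal sketch of that idea, assuming the module is hosted at a known URL (the address below is hypothetical), the assembly bytes could be downloaded and loaded directly from memory:

using (var client = new System.Net.WebClient())
{
    // Download the raw assembly bytes and load the module without touching the disk.
    byte[] raw = client.DownloadData("http://example.com/modules/SampleComponent.dll");
    Assembly module = Assembly.Load(raw);

    Type type = module.GetType("SampleComponent.SampleComponent");
    var component = (IComponent)Activator.CreateInstance(type);
    Console.WriteLine(component.GetResult());
}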

Monday, October 15, 2012

Functional Decomposition: Dynamic Task Execution Ordering, Based on Dependencies

Task Execution Synchronization

Even processes that spawn thousands of threads start from a single entry point that handles the preparations for parallel execution. If ordering is sometimes mandatory for sequential execution, it is even more important when parallelism comes into play.

To achieve synchronization when dividing a task into multiple sub-tasks using functional decomposition, we must know exactly how these sub-tasks are related to each other - in other words, we must build the dependency chain. Independent tasks should always be completed first, followed by the tasks that rely on them for their initial work.

In advanced situations hard-coding this dependency chain is not a maintainable solution, and in some complex cases having a static description of the relations between the tasks is impossible.

For maximum maintainability the only thing we want to specify for each sub-task is the list of other sub-tasks it depends on, and then implement a high-level framework that arranges the execution order.

Defining the sub-tasks (Example)

Consider a scenario with four operations A, B, C and D:
- A is an independent routine
- B relies on the completion of C
- C needs the result from A
- D cannot start before B and C have finished

We'll define a worker class for each of them and set the dependencies using attributes:

class WorkerA : Worker { }

[DependsOn(typeof(WorkerC))]
class WorkerB : Worker { }

[DependsOn(typeof(WorkerA))]
class WorkerC : Worker { }

[DependsOn(typeof(WorkerC), typeof(WorkerB))]
class WorkerD : Worker { }

The attribute in this case is defined as:

class DependsOnAttribute : Attribute
{
    public Type[] Dependencies { get; set; }

    public DependsOnAttribute(params Type[] t)
    {
        Dependencies = t;
    }
}

In a real scenario these classes would inherit from a base worker class that exposes a virtual method for loading the data each derived class needs, and a method to start the task.
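A minimal sketch of such a base class could look like this (the member names LoadData and Run are just an assumption for the illustration):

class Worker
{
    // Each derived worker overrides this to load the data it needs.
    public virtual void LoadData() { }

    // Each derived worker overrides this to perform its actual work.
    public virtual void Run() { }
}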

Implementing the routines

Now that we have an easy way to change the dependency chain, in which every worker class knows its own dependencies, we need some basic logic to build the order of these operations dynamically at runtime. To implement the priority manager class, the first thing we need is a method that returns the dependencies of a given worker class:

private static IEnumerable<Type> GetDependenciesOf(Type t, IEnumerable<Type> remaining)
{
    var dependsOn = t.GetCustomAttributes(typeof(DependsOnAttribute), false);

    // Keep only the dependencies that are still pending, and ignore self-references.
    var types = from attr in dependsOn.Cast<DependsOnAttribute>()
                from d in attr.Dependencies
                where remaining.Contains(d) && (d != t)
                select d;
    return types;
}

To decide where to start, we need a method that returns an arbitrary (in our case the first) worker that does not depend on any other worker from a given list (the pending tasks):

private static Type GetFirstIndependent(IEnumerable<Type> remaining)
{
    // Pick the first worker that has no pending dependencies.
    var first = (from t in remaining
                 let d = GetDependenciesOf(t, remaining)
                 where !d.Any()
                 select t).FirstOrDefault();
    return first;
}

At this point we have everything we need to build the list of tasks arranged dynamically by priority:

public static IEnumerable<Type> BuildPriorityList(IEnumerable<Type> types)
{
    List<Type> remaining = new List<Type>(types);
    while (remaining.Count > 0)
    {
        Type t = GetFirstIndependent(remaining);
        if (t == null)
            throw new InvalidOperationException("Circular dependency detected.");
        remaining.Remove(t);
        yield return t;
    }
}

Putting it all together, we can get the ordered list of tasks by passing all worker types, in any order, to the BuildPriorityList method.
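For example, assuming the priority manager class is called PriorityManager and the workers expose the LoadData and Run methods sketched above:

var workerTypes = new[] { typeof(WorkerD), typeof(WorkerB), typeof(WorkerA), typeof(WorkerC) };

foreach (Type t in PriorityManager.BuildPriorityList(workerTypes))
{
    var worker = (Worker)Activator.CreateInstance(t);
    worker.LoadData();
    worker.Run(); // executes in the order A, C, B, D
}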

The next steps

In general there are two main directions for improvements from here:
- Parallel execution built on the top of the priority list
- Soft coded worker definitions and common compiled logic
We'll cover both topics in the near future and build a command framework that processes soft-coded operation sequences, which could be defined in XML.

Sunday, October 14, 2012

Welcome

Hello World!

I truly believe that there will be incomparably more interesting posts here, so I'll keep this one simple.
Recently I decided to start sharing my thoughts on software development methodologies,
and the results will appear on this website in the near future.
Check my Google Plus profile for more details about me.

Stay Tuned World ...