
Flexible Resource Constrained Project Scheduling Problem

Principles learned

  • Add multiple set decision variables

  • Use the find, contains and partition operators

  • Use interval decision variables

  • Set precedence constraints

  • Use a nesting of two lambda expressions to write a constraint

Problem


In the flexible cumulative problem, a project consists of a set of tasks to be scheduled on a finite number of renewable resources. Each task must be performed by exactly one resource, and several resources may be able to carry it out. On each resource, a task has a given duration and weight (both possibly zero) and cannot be interrupted; the weight is the amount of the resource the task consumes while it is being processed. If both the duration and the weight of a task are zero for a given resource, the task cannot be performed by that resource. There are precedence constraints between the tasks: each task must end before any of its successors can start. Each resource has a given maximum capacity: it can process several tasks at once, but the sum of the weights of the tasks being processed can never exceed this capacity.

The goal is to find a schedule that minimizes the makespan: the time when all tasks have been processed.

Download the example


Data

The format of the data files is as follows (a small hypothetical instance is shown after the list):

  • First line:

    • Number of tasks

    • Number of renewable resources

  • Second line: Maximum capacity for each resource

  • From the third line, for each task, for each resource:

    • Duration of the task (taskProcessingTimeData)

    • Resource requirements (weights)

  • From the next line, for each task:

    • Number of successors

    • Task ID for each successor
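
For illustration, here is a small hypothetical instance (not one of the provided files) with 3 tasks and 2 resources; task indices are written 0-based, since the programs below use them directly as array indices:

3 2
4 3
2 1 0 0
3 2 2 1
0 0 1 2
1 1
1 2
0

Here, task 0 takes 2 time units with weight 1 on resource 0 and cannot be performed by resource 1 (duration and weight both zero), the resource capacities are 4 and 3, and task 0 precedes task 1, which precedes task 2.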

Program

We use set variables to model the set of tasks performed by each resource.

Each task must be processed, hence the partition operator on the sets, which ensures that each task belongs to one and only one resource. Resources that are not compatible with a task are excluded with a contains constraint.
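
As a sketch, using the Python API shown further down this page (and assuming the model object and the instance data nb_tasks, nb_resources, task_processing_time_data and weight have been created as in the full program):

# One set variable per resource: the tasks assigned to that resource
resources_tasks = [model.set(nb_tasks) for r in range(nb_resources)]
resources = model.array(resources_tasks)

# A zero duration and a zero weight mean the resource cannot process the task
for i in range(nb_tasks):
    for r in range(nb_resources):
        if task_processing_time_data[i][r] == 0 and weight[r][i] == 0:
            model.constraint(model.contains(resources_tasks[r], i) == 0)

# Each task belongs to exactly one of the sets
model.constraint(model.partition(resources))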

The find operator takes an array of sets and an integer value, and returns the position in the array of the set containing that value, if any. Here, we use it to retrieve the id of the resource selected for each task, which in turn lets us deduce the task's duration, since it depends on the selected resource.

We use interval variables to model the time range of each task; the length of each interval must equal the task's duration on the resource that processes it.
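
Continuing the same Python sketch, with horizon the trivial upper bound computed from the data:

# Index of the set (that is, the resource) containing each task
task_resource = [model.find(resources, i) for i in range(nb_tasks)]

# One interval decision per task, within the scheduling horizon
tasks = [model.interval(0, horizon) for i in range(nb_tasks)]

# The interval length must equal the processing time on the selected resource
task_processing_time = model.array(task_processing_time_data)
for i in range(nb_tasks):
    model.constraint(model.length(tasks[i]) == task_processing_time[i][task_resource[i]])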

The precedence constraints are easily written: each task must end before any of its successors can start.
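
In the same sketch, with nb_successors and successors read from the instance:

# Each task ends before any of its successors starts
for i in range(nb_tasks):
    for s in range(nb_successors[i]):
        model.constraint(tasks[i] < tasks[successors[i][s]])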

The cumulative resource constraints can be formulated as follows: for each resource r, and for each time slot t, the amount of resource consumed by the tasks that are being processed must not exceed the resource’s capacity.

To model these constraints, we sum up, for each resource r, the weights of all the tasks it processes at time slot t.
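
A sketch of this constraint in the Python API, nesting two lambda functions; the makespan expression is defined first since it also bounds the time slots to check:

# Makespan: end of the last task
makespan = model.max([model.end(tasks[i]) for i in range(nb_tasks)])

# Hexaly arrays, so that weights and intervals can be indexed by expressions
tasks_array = model.array(tasks)
weight_array = model.array(weight)

for r in range(nb_resources):
    # Outer lambda: for each time slot t before the makespan...
    capacity_respected = model.lambda_function(
        lambda t: model.sum(
            resources_tasks[r],
            # ...inner lambda: sum the weights of the tasks of resource r active at t
            model.lambda_function(
                lambda i: model.at(weight_array, r, i) * model.contains(tasks_array[i], t)))
        <= capacity[r])
    model.constraint(model.and_(model.range(makespan), capacity_respected))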

The makespan to minimize is the time when all the tasks have been processed.

Execution:
hexaly flexible_cumulative.hxm inFileName=instances/pat1.fc [outFileName=] [hxTimeLimit=]
use io;

/* Read instance data.*/
function input() {
    local usage = "Usage: hexaly flexible_cumulative.hxm inFileName=instanceFile "
            + "[outFileName=outputFile] [hxTimeLimit=timeLimit]";
    if (inFileName == nil) throw usage;

    inFile = io.openRead(inFileName);
    // Number of tasks
    nbTasks = inFile.readInt();
    // Number of resources
    nbResources = inFile.readInt();

    // Maximum capacity of each resource
    capacity[r in 0...nbResources] = inFile.readInt();
    for [i in 0...nbTasks][r in 0...nbResources] {
        // Duration of task i if task i is done by resource r
        taskProcessingTime[i][r] = inFile.readInt();
        // Resource weight of resource r required for task i
        weight[r][i] = inFile.readInt(); 
    }

    for [i in 0...nbTasks] {
        // The number of successors of task i
        nbSuccessors[i] = inFile.readInt();
        // Successors of each task i
        for [s in 0...nbSuccessors[i]] {
            successors[i][s] = inFile.readInt();
        } 
    } 

    // Trivial upper bound for the start times of the tasks
    horizon = sum[i in 0...nbTasks](max[r in 0...nbResources](taskProcessingTime[i][r]));
    
    inFile.close();
}


function model() {
    // Set of tasks done by each resource
    resourcesTasks[r in 0...nbResources] <- set(nbTasks);

    // Only compatible resources can be selected for a task
    for [i in 0...nbTasks][r in 0...nbResources] {
        if (taskProcessingTime[i][r] == 0 && weight[r][i] == 0)
            constraint contains(resourcesTasks[r], i) == 0;
    }

    // All tasks are scheduled on the resources
    constraint partition[r in 0...nbResources](resourcesTasks[r]);

    // For each task, the selected resource
    taskResource[i in 0...nbTasks] <- find(resourcesTasks, i);

    // Interval decisions: time range of each task
    tasks[i in 0...nbTasks] <- interval(0, horizon);

    // Task duration constraints
    for [i in 0...nbTasks] {
        constraint length(tasks[i]) == taskProcessingTime[i][taskResource[i]];
    }

    // Precedence constraints between the tasks
    for [i in 0...nbTasks][s in 0...nbSuccessors[i]] {
        constraint tasks[i] < tasks[successors[i][s]];
    }

    // Makespan: end of the last task
    makespan <- max[i in 0...nbTasks](end(tasks[i]));

    // Cumulative resource constraints
    for [r in 0...nbResources] {
        constraint and(0...makespan, t => 
                sum(resourcesTasks[r], i => weight[r][i] * contains(tasks[i], t)) <= capacity[r]);
    }

    // Minimize the makespan
    minimize makespan;
}

/* Parameterize the solver */
function param() {
    if (hxTimeLimit == nil) hxTimeLimit = 60;
}

/* Write the solution in a file with the following format:
 *  - total makespan
 *  - for each task, the task id, the selected resource, the start and end times */
function output() {
    if (outFileName != nil) {
        outFile = io.openWrite(outFileName);
        println("Solution written in file ", outFileName);
        outFile.println(makespan.value);
        for [i in 0...nbTasks] {
            outFile.println(i, " ", taskResource[i].value, " ", tasks[i].value.start, " ", tasks[i].value.end);
        }
    }
}
Execution (Windows)
set PYTHONPATH=%HX_HOME%\bin\python
python flexible_cumulative.py instances\pat1.fc
Execution (Linux)
export PYTHONPATH=/opt/hexaly_13_0/bin/python
python flexible_cumulative.py instances/pat1.fc
import hexaly.optimizer
import sys


def read_instance(filename):
    with open(filename) as f:
        lines = f.readlines()

    first_line = lines[0].split()

    # Number of tasks
    nb_tasks = int(first_line[0])

    # Number of resources
    nb_resources = int(first_line[1])

    # Maximum capacity of each resource
    capacity = [int(lines[1].split()[r]) for r in range(nb_resources)]

    # Duration of task i if task i is done by resource r
    task_processing_time_data = [[] for i in range(nb_tasks)]

    # Resource weight of resource r required for task i
    weight = [[] for r in range(nb_resources)]

    # Number of successors
    nb_successors = [0 for i in range(nb_tasks)]

    # Successors of each task i
    successors = [[] for i in range(nb_tasks)]

    for i in range(nb_tasks):
        line_d_w = lines[i + 2].split()
        for r in range(nb_resources):
            task_processing_time_data[i].append(int(line_d_w[2 * r]))
            weight[r].append(int(line_d_w[2 * r + 1]))

        line_succ = lines[i + 2 + nb_tasks].split()
        nb_successors[i] = int(line_succ[0])
        successors[i] = [int(elm) for elm in line_succ[1::]]

    # Trivial upper bound for the start times of the tasks
    horizon = sum(max(task_processing_time_data[i][r] for r in range(nb_resources)) for i in range(nb_tasks))

    return (nb_tasks, nb_resources, capacity, task_processing_time_data, weight, nb_successors, successors, horizon)


def main(instance_file, output_file, time_limit):
    nb_tasks, nb_resources, capacity, task_processing_time_data, weight,\
        nb_successors, successors, horizon = read_instance(instance_file)

    with hexaly.optimizer.HexalyOptimizer() as optimizer:
        #
        # Declare the optimization model
        #
        model = optimizer.model

        # Set of tasks done by each resource
        resources_tasks = [model.set(nb_tasks) for r in range(nb_resources)]
        resources = model.array(resources_tasks)

        # Only compatible resources can be selected for a task
        for i in range(nb_tasks):
            for r in range(nb_resources):
                if task_processing_time_data[i][r] == 0 and weight[r][i] == 0:
                    model.constraint(model.contains(resources_tasks[r], i) == 0)

        # For each task, the selected resource
        task_resource = [model.find(resources, t) for t in range(nb_tasks)]

        # All tasks are scheduled on the resources
        model.constraint(model.partition(resources))

        # Interval decisions: time range of each task
        tasks = [model.interval(0, horizon) for i in range(nb_tasks)]

        # Create Hexaly arrays to be able to access them with an "at" operator
        tasks_array = model.array(tasks)
        task_processing_time = model.array(task_processing_time_data)
        weight_array = model.array(weight)

        # Task duration constraints
        for i in range(nb_tasks):
            model.constraint(model.length(tasks[i]) == task_processing_time[i][task_resource[i]])

        # Precedence constraints between the tasks
        for i in range(nb_tasks):
            for s in range(nb_successors[i]):
                model.constraint(tasks[i] < tasks[successors[i][s]])

        # Makespan: end of the last task
        makespan = model.max([model.end(tasks[i]) for i in range(nb_tasks)])

        # Cumulative resource constraints
        for r in range(nb_resources):
            capacity_respected = model.lambda_function(
                lambda t: model.sum(resources_tasks[r], model.lambda_function(
                    lambda i: model.at(weight_array, r, i) * model.contains(tasks_array[i], t)))
                <= capacity[r])
            model.constraint(model.and_(model.range(makespan), capacity_respected))

        # Minimize the makespan
        model.minimize(makespan)

        model.close()

        # Parameterize the optimizer
        optimizer.param.time_limit = time_limit

        optimizer.solve()

        #
        # Write the solution in a file with the following format:
        # - total makespan
        # - for each task, the task id, the selected resource, the start and end times
        #
        if output_file is not None:
            with open(output_file, "w") as f:
                print("Solution written in file", output_file)
                f.write(str(makespan.value) + "\n")
                for i in range(nb_tasks):
                    f.write(
                        str(i) + " " + str(task_resource[i].value) + " " + str(tasks[i].value.start()) + " " +
                        str(tasks[i].value.end()))
                    f.write("\n")


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: python flexible_cumulative.py instance_file [output_file] [time_limit]")
        sys.exit(1)

    instance_file = sys.argv[1]
    output_file = sys.argv[2] if len(sys.argv) >= 3 else None
    time_limit = int(sys.argv[3]) if len(sys.argv) >= 4 else 60
    main(instance_file, output_file, time_limit)
Compilation / Execution (Windows)
cl /EHsc flexible_cumulative.cpp -I%HX_HOME%\include /link %HX_HOME%\bin\hexaly130.lib
flexible_cumulative instances\pat1.fc
Compilation / Execution (Linux)
g++ flexible_cumulative.cpp -I/opt/hexaly_13_0/include -lhexaly130 -lpthread -o flexible_cumulative
./flexible_cumulative instances/pat1.fc
#include "optimizer/hexalyoptimizer.h"
#include <algorithm>
#include <fstream>
#include <iostream>
#include <limits>
#include <numeric>
#include <vector>

using namespace hexaly;

class FlexibleCumulative {
private:
    // Number of tasks
    int nbTasks;
    // Number of resources
    int nbResources;
    // Maximum capacity of each resource
    std::vector<int> capacity;
    // Duration of task i if task i is done by resource r
    std::vector<std::vector<int>> taskProcessingTimeData;
    // Resource weight of resource r required for task i
    std::vector<std::vector<int>> weightData;
    // Number of successors
    std::vector<int> nbSuccessors;
    // Successors for each task i
    std::vector<std::vector<int>> successors;
    // Trivial upper bound for the start times of the tasks
    int horizon = 0;

    // Hexaly Optimizer
    HexalyOptimizer optimizer;
    // Decision variables: set of tasks done by each resource
    std::vector<HxExpression> resourcesTasks;
    // Decision variables: time range of each task
    std::vector<HxExpression> tasks;
    // For each task, the selected resource
    std::vector<HxExpression> taskResource;
    // Objective = minimize the makespan: end of the last task
    HxExpression makespan;

public:
    FlexibleCumulative(const std::string& fileName) : optimizer() {}

    void readInstance(const std::string& fileName) {
        std::ifstream infile;
        infile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        infile.open(fileName.c_str());

        infile >> nbTasks;
        infile >> nbResources;

        capacity.resize(nbResources);
        for (int r = 0; r < nbResources; ++r) {
            infile >> capacity[r];
        }

        taskProcessingTimeData.resize(nbTasks);
        weightData.resize(nbResources);

        for (int i = 0; i < nbTasks; ++i) {
            taskProcessingTimeData[i].resize(nbResources);
        }

        for (int r = 0; r < nbResources; ++r) {
            weightData[r].resize(nbTasks);
        }

        for (int i = 0; i < nbTasks; ++i) {
            for (int r = 0; r < nbResources; ++r) {
                infile >> taskProcessingTimeData[i][r];
                infile >> weightData[r][i];
            }
        }

        nbSuccessors.resize(nbTasks);
        successors.resize(nbTasks);

        for (int i = 0; i < nbTasks; ++i) {
            infile >> nbSuccessors[i];
            successors[i].resize(nbSuccessors[i]);
            for (int s = 0; s < nbSuccessors[i]; ++s) {
                infile >> successors[i][s];
            }
            horizon += *(std::max_element(taskProcessingTimeData[i].begin(), taskProcessingTimeData[i].end()));
        }

        infile.close();
    }

    void solve(int TimeLimit) {
        // Declare the optimization model
        HxModel model = optimizer.getModel();

        resourcesTasks.resize(nbResources);
        HxExpression resources = model.array();
        for (int r = 0; r < nbResources; ++r) {
            resourcesTasks[r] = model.setVar(nbTasks);
            resources.addOperand(resourcesTasks[r]);
        }

        // Create Hexaly arrays to be able to access them with "at" operators
        HxExpression weight = model.array();
        for (int r = 0; r < nbResources; ++r) {
            weight.addOperand(model.array(weightData[r].begin(), weightData[r].end()));
        }
        HxExpression taskProcessingTime = model.array();
        for (int i = 0; i < nbTasks; ++i) {
            taskProcessingTime.addOperand(
                model.array(taskProcessingTimeData[i].begin(), taskProcessingTimeData[i].end()));
        }

        // Only compatible resources can be selected for a task
        for (int i = 0; i < nbTasks; ++i) {
            for (int r = 0; r < nbResources; ++r) {
                if (taskProcessingTimeData[i][r] == 0 && weightData[r][i] == 0) {
                    model.constraint(model.contains(resourcesTasks[r], i) == 0);
                }
            }
        }

        // All tasks are scheduled on the resources
        model.constraint(model.partition(resources));

        taskResource.resize(nbTasks);
        for (int i = 0; i < nbTasks; ++i) {
            // For each task, the selected resource
            taskResource[i] = model.find(resources, i);
        }

        tasks.resize(nbTasks);
        for (int i = 0; i < nbTasks; ++i) {
            // Interval decisions: time range of each task
            tasks[i] = model.intervalVar(0, horizon);

            // Task duration constraints
            model.constraint(model.eq(model.length(tasks[i]), taskProcessingTime[i][taskResource[i]]));
        }

        // Precedence constraints between the tasks
        for (int i = 0; i < nbTasks; ++i) {
            for (int s = 0; s < nbSuccessors[i]; ++s) {
                model.constraint(tasks[i] < tasks[successors[i][s]]);
            }
        }

        // Makespan: end of the last task
        makespan = model.max();
        for (int i = 0; i < nbTasks; ++i) {
            makespan.addOperand(model.end(tasks[i]));
        }

        // Create an Hexaly array to be able to access it with "at" operator
        HxExpression tasksArray = model.array();
        for (int i = 0; i < nbTasks; ++i) {
            tasksArray.addOperand(tasks[i]);
        }

        // Cumulative resource constraints
        for (int r = 0; r < nbResources; ++r) {
            HxExpression capacityRespected = model.createLambdaFunction([&](HxExpression t) {
                HxExpression taskWeight = model.createLambdaFunction([&](HxExpression i) {
                    return model.at(weight, r, i) * model.contains(tasksArray[i], t);
                });
                HxExpression totalWeight = model.sum(resourcesTasks[r], taskWeight);
                return model.leq(totalWeight, capacity[r]);
            });
            model.constraint(model.and_(model.range(0, makespan), capacityRespected));
        }

        // Minimize the makespan
        model.minimize(makespan);

        model.close();

        // Parameterize the optimizer
        optimizer.getParam().setTimeLimit(TimeLimit);

        optimizer.solve();
    }

    /* Write the solution in a file with the following format:
     *  - total makespan
     *  - for each task, the task id, the selected resource, the start and end times */
    void writeSolution(const std::string& fileName) {
        std::ofstream outfile(fileName.c_str());
        if (!outfile.is_open()) {
            std::cerr << "File " << fileName << " cannot be opened." << std::endl;
            exit(1);
        }
        std::cout << "Solution written in file " << fileName << std::endl;

        outfile << makespan.getValue() << std::endl;
        for (int i = 0; i < nbTasks; ++i) {
            outfile << i << " " << taskResource[i].getValue() << " " << tasks[i].getIntervalValue().getStart() << " "
                    << tasks[i].getIntervalValue().getEnd() << std::endl;
        }
        outfile.close();
    }
};

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cout << "Usage: flexible_cumulative instanceFile [outputFile] [timeLimit]" << std::endl;
        exit(1);
    }

    const char* instanceFile = argv[1];
    const char* outputFile = argc > 2 ? argv[2] : NULL;
    const char* strTimeLimit = argc > 3 ? argv[3] : "60";

    FlexibleCumulative model(instanceFile);
    try {
        model.readInstance(instanceFile);
        const int timeLimit = atoi(strTimeLimit);
        model.solve(timeLimit);
        if (outputFile != NULL)
            model.writeSolution(outputFile);
        return 0;
    } catch (const std::exception& e) {
        std::cerr << "An error occurred: " << e.what() << std::endl;
        return 1;
    }
}
Compilation / Execution (Windows)
copy %HX_HOME%\bin\Hexaly.NET.dll .
csc FlexibleCumulative.cs /reference:Hexaly.NET.dll
FlexibleCumulative instances\pat1.fc
using System;
using System.IO;
using System.Linq;
using Hexaly.Optimizer;

public class FlexibleCumulative : IDisposable
{
    // Number of tasks
    private int nbTasks;

    // Number of resources
    private int nbResources;

    // Maximum capacity of each resource
    private int[] capacity;

    // Duration of task i if task i is done by resource r
    private int[][] taskProcessingTimeData;

    // Resource weight of resource r required for task i
    private int[][] weightData;

    // Number of successors
    private int[] nbSuccessors;

    // Successors for each task i
    private int[][] successors;

    // Trivial upper bound for the start times of the tasks
    private int horizon = 0;

    // Hexaly Optimizer
    private HexalyOptimizer optimizer;

    // Decision variables: set of tasks done by each resource
    private HxExpression[] resourcesTasks;

    // Decision variables: time range of each task
    private HxExpression[] tasks;

    // For each task, the selected resource
    private HxExpression[] taskResource;

    // Objective = minimize the makespan: end of the last task
    private HxExpression makespan;

    public FlexibleCumulative(string fileName)
    {
        optimizer = new HexalyOptimizer();
    }

    private static string[] ReadNextLine(StreamReader input)
    {
        return input.ReadLine().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    private void ReadInstance(string fileName)
    {
        using (StreamReader input = new StreamReader(fileName))
        {
            string[] splitted = ReadNextLine(input);
            if (splitted.Length == 0)
                splitted = ReadNextLine(input);
            nbTasks = int.Parse(splitted[0]);
            nbResources = int.Parse(splitted[1]);

            capacity = new int[nbResources];
            splitted = ReadNextLine(input);
            for (int r = 0; r < nbResources; ++r)
                capacity[r] = int.Parse(splitted[r]);

            taskProcessingTimeData = new int[nbTasks][];
            for (int i = 0; i < nbTasks; i++)
                taskProcessingTimeData[i] = new int[nbResources];

            weightData = new int[nbResources][];
            for (int r = 0; r < nbResources; r++)
                weightData[r] = new int[nbTasks];

            for (int i = 0; i < nbTasks; ++i)
            {
                splitted = ReadNextLine(input);
                if (splitted.Length == 0)
                    splitted = ReadNextLine(input);
                for (int r = 0; r < nbResources; ++r)
                {
                    taskProcessingTimeData[i][r] = int.Parse(splitted[2 * r]);
                    weightData[r][i] = int.Parse(splitted[2 * r + 1]);
                }
            }

            nbSuccessors = new int[nbTasks];
            successors = new int[nbTasks][];

            for (int i = 0; i < nbTasks; ++i)
            {
                splitted = ReadNextLine(input);
                if (splitted.Length == 0)
                    splitted = ReadNextLine(input);
                nbSuccessors[i] = int.Parse(splitted[0]);
                successors[i] = new int[nbSuccessors[i]];
                for (int s = 0; s < nbSuccessors[i]; ++s)
                    successors[i][s] = int.Parse(splitted[s + 1]);
                horizon += (taskProcessingTimeData[i]).Max();
            }
        }
    }

    public void Dispose()
    {
        optimizer.Dispose();
    }

    public void Solve(int timeLimit)
    {
        // Declare the optimization model
        HxModel model = optimizer.GetModel();

        resourcesTasks = new HxExpression[nbResources];

        for (int r = 0; r < nbResources; ++r)
            resourcesTasks[r] = model.Set(nbTasks);

        // Create HexalyOptimizer arrays to be able to access them with "at" operators
        HxExpression resources = model.Array(resourcesTasks);
        HxExpression weight = model.Array(weightData);
        HxExpression taskProcessingTime = model.Array(taskProcessingTimeData);

        // Only compatible resources can be selected for a task
        for (int i = 0; i < nbTasks; ++i)
        {
            for (int r = 0; r < nbResources; ++r)
            {
                if (taskProcessingTimeData[i][r] == 0 && weightData[r][i] == 0)
                {
                    model.Constraint(model.Contains(resourcesTasks[r], i) == 0);
                }
            }
        }

        // All tasks are scheduled on the resources
        model.Constraint(model.Partition(resources));

        taskResource = new HxExpression[nbTasks];
        for (int i = 0; i < nbTasks; ++i)
        {
            // For each task, the selected resource
            taskResource[i] = model.Find(resources, i);
        }

        tasks = new HxExpression[nbTasks];
        for (int i = 0; i < nbTasks; ++i)
        {
            // Interval decisions: time range of each task
            tasks[i] = model.Interval(0, horizon);

            // Task duration constraints
            HxExpression iExpr = model.CreateConstant(i);
            HxExpression duration = model.At(taskProcessingTime, iExpr, taskResource[i]);
            model.Constraint(model.Length(tasks[i]) == duration);
        }

        // Create a HexalyOptimizer array to be able to access it with "at" operator
        HxExpression tasksArray = model.Array();
        for (int i = 0; i < nbTasks; ++i)
            tasksArray.AddOperand(tasks[i]);

        // Precedence constraints between the tasks
        for (int i = 0; i < nbTasks; ++i)
        {
            for (int s = 0; s < nbSuccessors[i]; ++s)
            {
                model.Constraint(tasks[i] < tasks[successors[i][s]]);
            }
        }

        // Makespan: end of the last task
        makespan = model.Max();
        for (int i = 0; i < nbTasks; ++i)
            makespan.AddOperand(model.End(tasks[i]));

        // Cumulative resource constraints
        for (int r = 0; r < nbResources; ++r)
        {
            HxExpression capacityRespected = model.LambdaFunction(t =>
                {
                    HxExpression taskWeight = model.LambdaFunction(i =>
                    {
                        HxExpression rExpr = model.CreateConstant(r);
                        return model.At(weight, rExpr, i) * model.Contains(tasksArray[i], t);
                    });
                    HxExpression totalWeight = model.Sum(resourcesTasks[r], taskWeight);
                    return totalWeight <= capacity[r];
                });
            model.Constraint(model.And(model.Range(0, makespan), capacityRespected));
        }

        // Minimize the makespan
        model.Minimize(makespan);

        model.Close();

        // Parameterize the optimizer
        optimizer.GetParam().SetTimeLimit(timeLimit);

        optimizer.Solve();
    }

    /* Write the solution in a file with the following format:
     *  - total makespan
     *  - for each task, the task id, the selected resource, the start and end times */
    public void WriteSolution(string fileName)
    {
        using (StreamWriter output = new StreamWriter(fileName))
        {
            Console.WriteLine("Solution written in file " + fileName);
            output.WriteLine(makespan.GetValue());
            for (int i = 0; i < nbTasks; ++i)
            {
                output.Write(i + " " + taskResource[i].GetValue() + " " + tasks[i].GetIntervalValue().Start() + " "
                        + tasks[i].GetIntervalValue().End());
                output.WriteLine();
            }
        }
    }

    public static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.WriteLine("Usage: FlexibleCumulative instanceFile [outputFile] [timeLimit]");
            System.Environment.Exit(1);
        }

        string instanceFile = args[0];
        string outputFile = args.Length > 1 ? args[1] : null;
        string strTimeLimit = args.Length > 2 ? args[2] : "60";

        using (FlexibleCumulative model = new FlexibleCumulative(instanceFile))
        {
            model.ReadInstance(instanceFile);
            model.Solve(int.Parse(strTimeLimit));
            if (outputFile != null)
                model.WriteSolution(outputFile);
        }
    }
}
Compilation / Execution (Windows)
javac FlexibleCumulative.java -cp %HX_HOME%\bin\hexaly.jar
java -cp %HX_HOME%\bin\hexaly.jar;. FlexibleCumulative instances\pat1.fc
Compilation / Execution (Linux)
javac FlexibleCumulative.java -cp /opt/hexaly_13_0/bin/hexaly.jar
java -cp /opt/hexaly_13_0/bin/hexaly.jar:. FlexibleCumulative instances/pat1.fc
import java.util.*;
import java.io.*;
import com.hexaly.optimizer.*;

public class FlexibleCumulative {
    // Number of tasks
    private int nbTasks;
    // Number of resources
    private int nbResources;
    // Maximum capacity of each resource
    private int[] capacity;
    // Duration of task i if task i is done by resource r
    private int[][] taskProcessingTimeData;
    // Resource weight of resource r required for task i
    private int[][] weightData;
    // Number of successors
    private int[] nbSuccessors;
    // Successors for each task i
    private int[][] successors;
    // Trivial upper bound for the start times of the tasks
    private int horizon = 0;

    // Hexaly Optimizer
    final HexalyOptimizer optimizer;
    // Decision variables: set of tasks done by each resource
    private HxExpression[] resourcesTasks;
    // Decision variables: time range of each task
    private HxExpression[] tasks;
    // For each task, the selected resource
    private HxExpression[] taskResource;
    // Objective = minimize the makespan: end of the last task
    private HxExpression makespan;

    public FlexibleCumulative(HexalyOptimizer optimizer, String fileName) throws IOException {
        this.optimizer = optimizer;
    }

    private void readInstance(String fileName) throws IOException {
        try (Scanner input = new Scanner(new File(fileName))) {
            nbTasks = input.nextInt();
            nbResources = input.nextInt();

            capacity = new int[nbResources];
            for (int r = 0; r < nbResources; ++r) {
                capacity[r] = input.nextInt();
            }

            taskProcessingTimeData = new int[nbTasks][nbResources];
            weightData = new int[nbResources][nbTasks];
            nbSuccessors = new int[nbTasks];
            successors = new int[nbTasks][];
            for (int i = 0; i < nbTasks; ++i) {
                for (int r = 0; r < nbResources; ++r) {
                    taskProcessingTimeData[i][r] = input.nextInt();
                    weightData[r][i] = input.nextInt();
                }
            }
            for (int i = 0; i < nbTasks; ++i) {
                nbSuccessors[i] = input.nextInt();
                successors[i] = new int[nbSuccessors[i]];
                for (int s = 0; s < nbSuccessors[i]; ++s) {
                    successors[i][s] = input.nextInt();
                }

                int maxDurationTask = taskProcessingTimeData[i][0];
                for (int r = 0; r < nbResources; ++r) {
                    if (taskProcessingTimeData[i][r] > maxDurationTask) {
                        maxDurationTask = taskProcessingTimeData[i][r];
                    }
                }
                horizon += maxDurationTask;
            }
        }
    }

    public void solve(int timeLimit) {
        // Declare the optimization model
        HxModel model = optimizer.getModel();

        resourcesTasks = new HxExpression[nbResources];

        for (int r = 0; r < nbResources; ++r) {
            resourcesTasks[r] = model.setVar(nbTasks);
        }
        // Create HexalyOptimizer arrays to be able to access them with "at" operators
        HxExpression resources = model.array(resourcesTasks);
        HxExpression weight = model.array(weightData);
        HxExpression taskProcessingTime = model.array(taskProcessingTimeData);

        // Only compatible resources can be selected for a task
        for (int i = 0; i < nbTasks; ++i) {
            for (int r = 0; r < nbResources; ++r) {
                if (taskProcessingTimeData[i][r] == 0 && weightData[r][i] == 0) {
                    model.constraint(model.eq(model.contains(resourcesTasks[r], i), 0));
                }
            }
        }
        
        // All tasks are scheduled on the resources
        model.constraint(model.partition(resources));
        
        taskResource = new HxExpression[nbTasks];
        for (int i = 0; i < nbTasks; ++i) {
            // For each task, the selected resource
            taskResource[i] = model.find(resources, i);
        }

        tasks = new HxExpression[nbTasks];
        for (int i = 0; i < nbTasks; ++i) {
            // Interval decisions: time range of each task
            tasks[i] = model.intervalVar(0, horizon);

            // Task duration constraints
            HxExpression iExpr = model.createConstant(i);
            HxExpression duration = model.at(taskProcessingTime, iExpr, taskResource[i]);
            model.constraint(model.eq(model.length(tasks[i]), duration));
        }

        // Create a HexalyOptimizer array to be able to access it with "at" operator
        HxExpression tasksArray = model.array();
        for (int i = 0; i < nbTasks; ++i) {
            tasksArray.addOperand(tasks[i]);
        }

        // Precedence constraints between the tasks
        for (int i = 0; i < nbTasks; ++i) {
            for (int s = 0; s < nbSuccessors[i]; ++s) {
                model.constraint(model.lt(tasks[i], tasks[successors[i][s]]));
            }
        }

        // Makespan: end of the last task
        makespan = model.max();
        for (int i = 0; i < nbTasks; ++i) {
            makespan.addOperand(model.end(tasks[i]));
        }

        // Cumulative resource constraints
        for (int r = 0; r < nbResources; ++r) {
            final int rL = r;
            HxExpression capacityRespected = model.lambdaFunction(t -> {
                HxExpression taskWeight = model.lambdaFunction(i -> {
                    HxExpression rExpr = model.createConstant(rL);
                    return model.prod(
                           model.at(weight, rExpr, i),
                           model.contains(model.at(tasksArray, i), t));
                });
                HxExpression totalWeight = model.sum(resourcesTasks[rL], taskWeight);
                return model.leq(totalWeight, capacity[rL]);
            });
            model.constraint(model.and(model.range(0, makespan), capacityRespected));
        }

        // Minimize the makespan
        model.minimize(makespan);
        model.close();

        // Parameterize the optimizer
        optimizer.getParam().setTimeLimit(timeLimit);

        optimizer.solve();
    }

    /*
     * Write the solution in a file with the following format:
     * - total makespan
     * - for each task, the task id, the selected resource, the start and end times
     */
    public void writeSolution(String fileName) throws IOException {
        try (PrintWriter output = new PrintWriter(fileName)) {
            System.out.println("Solution written in file " + fileName);

            output.println(makespan.getValue());

            for (int i = 0; i < nbTasks; ++i) {
                output.println(i + " " + taskResource[i].getValue() + " " + tasks[i].getIntervalValue().getStart() + " "
                        + tasks[i].getIntervalValue().getEnd());
            }
        }
    }

    public static void main(String[] args) {
        if (args.length < 1) {
            System.out.println("Usage: java FlexibleCumulative instanceFile [outputFile] [timeLimit]");
            System.exit(1);
        }

        String instanceFile = args[0];
        String outputFile = args.length > 1 ? args[1] : null;
        String strTimeLimit = args.length > 2 ? args[2] : "60";

        try (HexalyOptimizer optimizer = new HexalyOptimizer()) {
            FlexibleCumulative model = new FlexibleCumulative(optimizer, instanceFile);
            model.readInstance(instanceFile);
            model.solve(Integer.parseInt(strTimeLimit));
            if (outputFile != null) {
                model.writeSolution(outputFile);
            }
        } catch (Exception ex) {
            System.err.println(ex);
            ex.printStackTrace();
            System.exit(1);
        }
    }
}