1. Test Modules
  1. Training Characteristics
    1. Input Learning
      1. Gradient Descent
      2. Conjugate Gradient Descent
      3. Limited-Memory BFGS
    2. Results
  2. Results

Target Description: The type Sum inputs layer.

Report Description: The type N 1 test.

Subreport: Logs for com.simiacryptus.ref.lang.ReferenceCountingBase

Test Modules

Using Seed 5724058019111185408

Training Characteristics

Input Learning

In this test, we use a network to learn this target input, given its pre-evaluated output:

TrainingTester.java:445 executed in 0.00 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(input_target)).flatMap(RefArrays::stream).map(x -> {
      try {
        return x.prettyPrint();
      } finally {
        x.freeRef();
      }
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [ 1.556, -1.424, 1.368, 0.3, -1.764, -1.54, -1.476, -1.116, ... ]
    [ 1.776 ]
    [ -1.764, 0.048, -1.028, 1.62, -1.456, -1.616, -0.408, -0.852, ... ]
    [ 1.776 ]
    [ -1.688, 0.496, -1.228, 1.556, -1.54, -0.472, 0.692, 1.764, ... ]
    [ 1.776 ]
    [ -1.492, -0.012, -0.636, 0.184, 1.512, 0.148, 1.64, -0.804, ... ]
    [ 1.776 ]
    [ -0.032, 1.916, 0.048, -0.636, 0.972, 1.42, 0.788, -0.384, ... ]
    [ 1.776 ]
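
The target here is SumInputsLayer, so each test case above pairs a dense input vector with a single-element tensor ([ 1.776 ]). As a rough, standalone illustration of what a "sum inputs" operation computes, the Java sketch below adds two tensors elementwise and broadcasts a one-element operand across the other; the class name and the broadcast rule are assumptions made for this illustration, not the com.simiacryptus.mindseye.layers.java.SumInputsLayer implementation.

    // Illustrative sketch only: elementwise addition with single-element broadcast.
    // Not the MindsEye SumInputsLayer source.
    final class SumInputsSketch {
      static double[] sum(double[] a, double[] b) {
        if (a.length == 1 && b.length != 1) return sum(b, a);
        double[] out = new double[a.length];
        if (b.length == 1) {
          for (int i = 0; i < a.length; i++) out[i] = a[i] + b[0];   // scalar broadcast
        } else if (a.length == b.length) {
          for (int i = 0; i < a.length; i++) out[i] = a[i] + b[i];   // elementwise sum
        } else {
          throw new IllegalArgumentException("shape mismatch: " + a.length + " vs " + b.length);
        }
        return out;
      }

      public static void main(String[] args) {
        // Mirrors the shape of the rows above: a dense vector paired with [ 1.776 ].
        double[] vector = {1.556, -1.424, 1.368, 0.3};
        double[] scalar = {1.776};
        System.out.println(java.util.Arrays.toString(sum(vector, scalar)));
      }
    }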

Gradient Descent

First, we train using a basic gradient descent method with weak line search conditions.

TrainingTester.java:638 executed in 0.62 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 3988873204228
BACKPROP_AGG_SIZE = 3
THREADS = 64
SINGLE_THREADED = false
Initialized CoreSettings = {
"backpropAggregationSize" : 3,
"jvmThreads" : 64,
"singleThreaded" : false
}
Reset training subject: 3988914406510
Constructing line search parameters: GD
th(0)=220.74724349405838;dx=-1.009897600108797E25
New Minimum: 220.74724349405838 > 0.0
Armijo: th(2.154434690031884)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(1.077217345015942)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(0.3590724483386473)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(0.08976811208466183)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(0.017953622416932366)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(0.002992270402822061)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(4.2746720040315154E-4)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(5.343340005039394E-5)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(5.9370444500437714E-6)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(5.937044450043771E-7)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(5.397313136403428E-8)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(4.4977609470028565E-9)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(3.4598161130791205E-10)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(2.4712972236279432E-11)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(1.6475314824186289E-12)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Armijo: th(1.029707176511643E-13)=102.88845215726217; dx=-3.1230176005488317E24 evalInputDelta=117.85879133679622
Armijo: th(6.057101038303783E-15)=218.54719386303668; dx=-1.0098976001009574E25 evalInputDelta=2.200049631021699
MIN ALPHA (3.3650561323909904E-16): th(2.154434690031884)=0.0
Fitness changed from 220.74724349405838 to 0.0
Iteration 1 complete. Error: 0.0 Total: 0.5760; Orientation: 0.0040; Line Search: 0.5168
th(0)=0.0;dx=-6238.419052799998
Armijo: th(2.154434690031884E-15)=0.0; dx=-6238.419052799999 evalInputDelta=0.0
Armijo: th(1.077217345015942E-15)=0.0; dx=-6238.419052799999 evalInputDelta=0.0
MIN ALPHA (3.5907244833864734E-16): th(0.0)=0.0
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0338; Orientation: 0.0019; Line Search: 0.0268
Iteration 2 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 2
Final threshold in iteration 2: 0.0 (> 0.0) after 0.611s (< 30.000s)

Returns

    0.0

Training Converged
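
The "Armijo: th(...)" lines above are trial steps taken by the ArmijoWolfeSearch line search: a candidate step size is evaluated, a sufficient-decrease (Armijo) condition of the form f(x + a*d) <= f(x) + c1*a*(g.d) is tested, and the step shrinks until either the test passes or the minimum step ("MIN ALPHA") is reached. The sketch below is a plain backtracking search built around that condition; the constants (c1 = 1e-4, shrink factor 0.5, minimum step 1e-16) are assumptions for illustration, not the values used by ArmijoWolfeSearch.

    import java.util.function.DoubleUnaryOperator;

    // Illustrative backtracking line search using the Armijo sufficient-decrease
    // condition. Constants are assumptions, not ArmijoWolfeSearch's defaults.
    final class ArmijoSketch {
      static double search(DoubleUnaryOperator phi,  // phi(alpha) = f(x + alpha * d)
                           double phi0,              // f(x)
                           double dphi0,             // directional derivative at alpha = 0 (negative for descent)
                           double alpha) {           // initial step size
        final double c1 = 1e-4;
        while (alpha > 1e-16) {
          if (phi.applyAsDouble(alpha) <= phi0 + c1 * alpha * dphi0) {
            return alpha;                            // sufficient decrease achieved
          }
          alpha *= 0.5;                              // otherwise backtrack
        }
        return 0.0;                                  // fell below the minimum step size
      }

      public static void main(String[] args) {
        // One-dimensional quadratic f(x) = x^2 at x = 1 with descent direction d = -1.
        double x = 1.0, d = -1.0;
        double step = search(a -> Math.pow(x + a * d, 2), x * x, 2 * x * d, 1.0);
        System.out.println("accepted step = " + step);
      }
    }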

Conjugate Gradient Descent

Next, we use a conjugate gradient descent method, which converges fastest on purely quadratic objectives.

TrainingTester.java:603 executed in 0.73 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new QuadraticSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 3989489775835
Reset training subject: 3989493470892
Constructing line search parameters: GD
F(0.0) = LineSearchPoint{point=PointSample{avg=220.74724349405838}, derivative=-1.009897600108797E25}
New Minimum: 220.74724349405838 > 0.0
F(1.0E-10) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(7.000000000000001E-10) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(4.900000000000001E-9) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(3.430000000000001E-8) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(2.4010000000000004E-7) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(1.6807000000000003E-6) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(1.1764900000000001E-5) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(8.235430000000001E-5) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(5.764801000000001E-4) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(0.004035360700000001) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(0.028247524900000005) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(0.19773267430000002) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(1.3841287201) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(9.688901040700001) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(67.8223072849) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(474.7561509943) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(3323.2930569601003) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(23263.0513987207) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(162841.3597910449) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(1139889.5185373144) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(7979226.6297612) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(5.58545864083284E7) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(3.909821048582988E8) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(2.7368747340080914E9) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
F(1.915812313805664E10) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
0.0 <= 220.74724349405838
F(1.0E10) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-2.044844723369055E14}, evalInputDelta = -220.74724349405838
Right bracket at 1.0E10
Converged to right
Fitness changed from 220.74724349405838 to 0.0
Iteration 1 complete. Error: 0.0 Total: 0.7139; Orientation: 0.0018; Line Search: 0.7014
F(0.0) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-6238.419052799998}
F(1.0E10) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=-6238.419052799999}, evalInputDelta = 0.0
0.0 <= 0.0
Converged to right
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0169; Orientation: 0.0015; Line Search: 0.0128
Iteration 2 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 2
Final threshold in iteration 2: 0.0 (> 0.0) after 0.731s (< 30.000s)

Returns

    0.0

Training Converged
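
A conjugate gradient method mixes the new gradient with the previous search direction, which makes it exact in at most n iterations on an n-dimensional quadratic objective; that is the sense in which it converges fastest on such problems. The textbook linear-CG sketch below, which solves Ax = b and hence minimizes 0.5*x'Ax - b'x, is included only to illustrate that property; it is not the GradientDescent/QuadraticSearch combination the trainer above actually runs.

    // Textbook linear conjugate gradient, for illustration only; not MindsEye code.
    final class ConjugateGradientSketch {
      static double[] solve(double[][] A, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();                      // residual r = b - A x (x starts at 0)
        double[] p = r.clone();                      // first direction is the residual
        double rsOld = dot(r, r);
        for (int k = 0; k < maxIter && Math.sqrt(rsOld) > tol; k++) {
          double[] Ap = multiply(A, p);
          double alpha = rsOld / dot(p, Ap);         // exact minimizing step along p
          for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
          double rsNew = dot(r, r);
          double beta = rsNew / rsOld;               // Fletcher-Reeves style coefficient
          for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
          rsOld = rsNew;
        }
        return x;
      }

      static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
      }

      static double[] multiply(double[][] A, double[] v) {
        double[] out = new double[A.length];
        for (int i = 0; i < A.length; i++) out[i] = dot(A[i], v);
        return out;
      }

      public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 3}};             // symmetric positive definite
        double[] b = {1, 2};
        System.out.println(java.util.Arrays.toString(solve(A, b, 10, 1e-10)));
      }
    }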

Limited-Memory BFGS

Next, we apply the same optimization using L-BFGS, which is nearly ideal for smooth, approximately quadratic objectives.

TrainingTester.java:674 executed in 0.37 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new LBFGS());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setIterationsPerSample(100);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 3990225833755
Reset training subject: 3990228808214
Adding measurement 1bac33b0 to history. Total: 0
LBFGS Accumulation History: 1 points
Constructing line search parameters: GD
Non-optimal measurement 220.74724349405838 < 220.74724349405838. Total: 1
th(0)=220.74724349405838;dx=-1.009897600108797E25
Adding measurement 41100b36 to history. Total: 1
New Minimum: 220.74724349405838 > 0.0
Armijo: th(2.154434690031884)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(1.077217345015942)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(0.3590724483386473)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(0.08976811208466183)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(0.017953622416932366)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(0.002992270402822061)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(4.2746720040315154E-4)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(5.343340005039394E-5)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(5.9370444500437714E-6)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(5.937044450043771E-7)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(5.397313136403428E-8)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(4.4977609470028565E-9)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(3.4598161130791205E-10)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(2.4712972236279432E-11)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(1.6475314824186289E-12)=0.0; dx=-2.044844723369055E14 evalInputDelta=220.74724349405838
Non-optimal measurement 102.88845215726217 < 0.0. Total: 2
Armijo: th(1.029707176511643E-13)=102.88845215726217; dx=-3.1230176005488317E24 evalInputDelta=117.85879133679622
Non-optimal measurement 218.54719386303668 < 0.0. Total: 2
Armijo: th(6.057101038303783E-15)=218.54719386303668; dx=-1.0098976001009574E25 evalInputDelta=2.200049631021699
Non-optimal measurement 0.0 < 0.0. Total: 2
MIN ALPHA (3.3650561323909904E-16): th(2.154434690031884)=0.0
Fitness changed from 220.74724349405838 to 0.0
Iteration 1 complete. Error: 0.0 Total: 0.3401; Orientation: 0.0058; Line Search: 0.3261
Non-optimal measurement 0.0 < 0.0. Total: 2
LBFGS Accumulation History: 2 points
Non-optimal measurement 0.0 < 0.0. Total: 2
th(0)=0.0;dx=-6238.419052799998
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(2.154434690031884E-15)=0.0; dx=-6238.419052799998 evalInputDelta=0.0
Non-optimal measurement 0.0 < 0.0. Total: 2
Armijo: th(1.077217345015942E-15)=0.0; dx=-6238.419052799998 evalInputDelta=0.0
Non-optimal measurement 0.0 < 0.0. Total: 2
MIN ALPHA (3.5907244833864734E-16): th(0.0)=0.0
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0302; Orientation: 0.0030; Line Search: 0.0242
Iteration 2 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 2
Final threshold in iteration 2: 0.0 (> 0.0) after 0.370s (< 30.000s)

Returns

    0.0

Training Converged
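
The "LBFGS Accumulation History: N points" lines above refer to the short memory of recent (step, gradient-change) pairs that L-BFGS keeps; the classic two-loop recursion combines that memory with the current gradient to produce an approximate Newton direction. The sketch below is a generic version of that recursion with a hypothetical Pair container and history layout; it is not the internals of com.simiacryptus.mindseye.opt.orient.LBFGS.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative L-BFGS two-loop recursion; the data layout is an assumption.
    final class LbfgsTwoLoopSketch {
      static final class Pair {
        final double[] s, y;                         // s = step taken, y = change in gradient
        Pair(double[] s, double[] y) { this.s = s; this.y = y; }
      }

      // Returns the search direction -H*g implied by the stored history (newest pair first).
      static double[] direction(double[] gradient, Deque<Pair> history) {
        double[] q = gradient.clone();
        Pair[] pairs = history.toArray(new Pair[0]);
        double[] alphas = new double[pairs.length];
        for (int i = 0; i < pairs.length; i++) {     // first loop: newest -> oldest
          double rho = 1.0 / dot(pairs[i].y, pairs[i].s);
          alphas[i] = rho * dot(pairs[i].s, q);
          axpy(-alphas[i], pairs[i].y, q);
        }
        if (pairs.length > 0) {                      // initial Hessian scale gamma = s'y / y'y
          Pair newest = pairs[0];
          scale(dot(newest.s, newest.y) / dot(newest.y, newest.y), q);
        }
        for (int i = pairs.length - 1; i >= 0; i--) { // second loop: oldest -> newest
          double rho = 1.0 / dot(pairs[i].y, pairs[i].s);
          double beta = rho * dot(pairs[i].y, q);
          axpy(alphas[i] - beta, pairs[i].s, q);
        }
        scale(-1.0, q);                              // negate for a descent direction
        return q;
      }

      static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
      }

      static void axpy(double a, double[] x, double[] y) {
        for (int i = 0; i < y.length; i++) y[i] += a * x[i];
      }

      static void scale(double a, double[] x) {
        for (int i = 0; i < x.length; i++) x[i] *= a;
      }

      public static void main(String[] args) {
        Deque<Pair> history = new ArrayDeque<>();
        history.addFirst(new Pair(new double[]{0.1, 0.0}, new double[]{0.4, 0.1}));
        System.out.println(java.util.Arrays.toString(direction(new double[]{1.0, 2.0}, history)));
      }
    }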

TrainingTester.java:576 executed in 0.10 seconds (0.000 gc):

    return TestUtil.compare(title + " vs Iteration", runs);
Logging
Plotting range=[0.0, 0.0], [2.0, 1.0]; valueStats=DoubleSummaryStatistics{count=0, sum=0.000000, min=Infinity, average=0.000000, max=-Infinity}
Only 0 points for GD
Only 0 points for CjGD
Only 0 points for LBFGS

Returns

Result (plot image omitted; no data points were plotted)

TrainingTester.java:579 executed in 0.00 seconds (0.000 gc):

    return TestUtil.compareTime(title + " vs Time", runs);
Logging
No Data

Results

TrainingTester.java:350 executed in 0.00 seconds (0.000 gc):

    return grid(inputLearning, modelLearning, completeLearning);

Returns

Result (rendered result grid omitted)

TrainingTester.java:353 executed in 0.00 seconds (0.000 gc):

    return new ComponentResult(null == inputLearning ? null : inputLearning.value,
        null == modelLearning ? null : modelLearning.value, null == completeLearning ? null : completeLearning.value);

Returns

    {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}

LayerTests.java:605 executed in 0.00 seconds (0.000 gc):

    throwException(exceptions.addRef());

Results

details: {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}
result: OK
  {
    "result": "OK",
    "performance": {
      "execution_time": "2.264",
      "gc_time": "0.204"
    },
    "created_on": 1587006008201,
    "file_name": "trainingTest",
    "report": {
      "simpleName": "N1Test",
      "canonicalName": "com.simiacryptus.mindseye.layers.java.SumInputsLayerTest.N1Test",
      "link": "https://github.com/SimiaCryptus/mindseye-java/tree/c9a1867488dc7e77a975f095285b5882c0486db6/src/test/java/com/simiacryptus/mindseye/layers/java/SumInputsLayerTest.java",
      "javaDoc": "The type N 1 test."
    },
    "training_analysis": {
      "input": {
        "LBFGS": {
          "type": "Converged",
          "value": 0.0
        },
        "CjGD": {
          "type": "Converged",
          "value": 0.0
        },
        "GD": {
          "type": "Converged",
          "value": 0.0
        }
      }
    },
    "archive": "s3://code.simiacrypt.us/tests/com/simiacryptus/mindseye/layers/java/SumInputsLayer/N1Test/trainingTest/202004160008",
    "id": "231e0883-c9a6-4195-abe4-a1d8a04a0ee5",
    "report_type": "Components",
    "display_name": "Comparative Training",
    "target": {
      "simpleName": "SumInputsLayer",
      "canonicalName": "com.simiacryptus.mindseye.layers.java.SumInputsLayer",
      "link": "https://github.com/SimiaCryptus/mindseye-java/tree/c9a1867488dc7e77a975f095285b5882c0486db6/src/main/java/com/simiacryptus/mindseye/layers/java/SumInputsLayer.java",
      "javaDoc": "The type Sum inputs layer."
    }
  }