1. Test Modules
  2. Training Characteristics
    1. Input Learning
      1. Gradient Descent
      2. Conjugate Gradient Descent
      3. Limited-Memory BFGS
    2. Results
  3. Results

Target Description: The type Img pixel gate layer.

Report Description: The type Basic.

Subreport: Logs for com.simiacryptus.ref.lang.ReferenceCountingBase

Test Modules

Using Seed 8592308219016957952

Training Characteristics

Input Learning

In this test, we use a network to learn this target input, given its pre-evaluated output:

TrainingTester.java:445 executed in 0.01 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(input_target)).flatMap(RefArrays::stream).map(x -> {
      try {
        return x.prettyPrint();
      } finally {
        x.freeRef();
      }
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ -0.608, 0.048, -0.384 ], [ 1.524, 0.7, -1.72 ] ],
    	[ [ 1.764, -1.028, 0.496 ], [ 1.208, -0.128, 0.08 ] ]
    ]
    [
    	[ [ 1.912 ], [ -0.852 ] ],
    	[ [ -1.688 ], [ -0.804 ] ]
    ]
    [
    	[ [ 0.7, -0.608, 1.764 ], [ -1.72, 1.208, -0.128 ] ],
    	[ [ 1.524, 0.496, -1.028 ], [ -0.384, 0.048, 0.08 ] ]
    ]
    [
    	[ [ -0.852 ], [ 1.912 ] ],
    	[ [ -1.688 ], [ -0.804 ] ]
    ]
    [
    	[ [ 0.496, -1.028, 0.08 ], [ 1.208, -0.128, 0.048 ] ],
    	[ [ 0.7, 1.764, 1.524 ], [ -1.72, -0.384, -0.608 ] ]
    ]
    [
    	[ [ -0.804 ], [ 1.912 ] ],
    	[ [ -1.688 ], [ -0.852 ] ]
    ]
    [
    	[ [ 0.7, -1.72, 0.08 ], [ 1.524, 0.496, -0.384 ] ],
    	[ [ 1.764, 0.048, -1.028 ], [ -0.128, -0.608, 1.208 ] ]
    ]
    [
    	[ [ -0.804 ], [ 1.912 ] ],
    	[ [ -1.688 ], [ -0.852 ] ]
    ]
    [
    	[ [ -0.128, 0.048, 0.496 ], [ 0.7, 1.524, -0.384 ] ],
    	[ [ 0.08, 1.208, 1.764 ], [ -1.72, -1.028, -0.608 ] ]
    ]
    [
    	[ [ -1.688 ], [ -0.804 ] ],
    	[ [ -0.852 ], [ 1.912 ] ]
    ]

Gradient Descent

First, we train using the basic gradient descent method with weak line search conditions.
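
The ArmijoWolfeSearch used here accepts a step size t along the descent direction d only once it satisfies a sufficient-decrease (Armijo) condition and a weak Wolfe curvature condition. A sketch of the standard form of those conditions; the constants c_1 and c_2 are generic placeholders, not values read from this run:

    % Armijo (sufficient decrease) and weak Wolfe (curvature) conditions
    % for objective f, step t, descent direction d, with 0 < c_1 < c_2 < 1.
    f(x + t\,d) \le f(x) + c_1\, t\, \nabla f(x)^{\top} d          \qquad \text{(Armijo)}
    \nabla f(x + t\,d)^{\top} d \ge c_2\, \nabla f(x)^{\top} d     \qquad \text{(weak Wolfe)}

In the log below, th(t) appears to be the objective value at step size t and dx the directional derivative along the search direction at that point; lines labelled Armijo are candidate steps that did not pass the acceptance test, so the step size keeps shrinking until the search settles on the best point it has seen.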

TrainingTester.java:638 executed in 0.29 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2375734223496
BACKPROP_AGG_SIZE = 3
THREADS = 64
SINGLE_THREADED = false
Initialized CoreSettings = {
"backpropAggregationSize" : 3,
"jvmThreads" : 64,
"singleThreaded" : false
}
Reset training subject: 2375775144027
Constructing line search parameters: GD
th(0)=55.813811825500295;dx=-1.2831394512177885E24
Armijo: th(2.154434690031884)=63.19121688981909; dx=1.941351432478069E35 evalInputDelta=-7.377405064318793
Armijo: th(1.077217345015942)=63.23098639954293; dx=9.70675716232633E34 evalInputDelta=-7.417174574042633
Armijo: th(0.3590724483386473)=63.48235344468897; dx=3.2355857206900894E34 evalInputDelta=-7.668541619188673
Armijo: th(0.08976811208466183)=64.39604945211286; dx=8.088964300764993E33 evalInputDelta=-8.582237626612567
Armijo: th(0.017953622416932366)=64.85357258755599; dx=1.6177928591287525E33 evalInputDelta=-9.039760762055693
Armijo: th(0.002992270402822061)=64.9846351423839; dx=2.696321421212023E32 evalInputDelta=-9.170823316883599
Armijo: th(4.2746720040315154E-4)=65.00935588198757; dx=3.851887634847941E31 evalInputDelta=-9.195544056487279
Armijo: th(5.343340005039394E-5)=65.01302626062918; dx=4.814858423290657E30 evalInputDelta=-9.19921443512888
Armijo: th(5.9370444500437714E-6)=65.01349356710418; dx=5.349831312031964E29 evalInputDelta=-9.199681741603882
Armijo: th(5.937044450043771E-7)=65.01354615646004; dx=5.349716084335708E28 evalInputDelta=-9.199734330959743
Armijo: th(5.397313136403428E-8)=65.01355146871173; dx=4.8622143423632226E27 evalInputDelta=-9.19973964321143
Armijo: th(4.4977609470028565E-9)=65.01355195566994; dx=4.0401091310545235E26 evalInputDelta=-9.199740130169644
Armijo: th(3.4598161130791205E-10)=63.81953820725518; dx=2.989499655326694E25 evalInputDelta=-8.005726381754883
New Minimum: 55.813811825500295 > 29.286518035474426
Armijo: th(2.4712972236279432E-11)=29.286518035474426; dx=1.185477114968172E24 evalInputDelta=26.52729379002587
New Minimum: 29.286518035474426 > 25.290017461762748
Armijo: th(1.6475314824186289E-12)=25.290017461762748; dx=-1.8047501065048973E23 evalInputDelta=30.523794363737547
Armijo: th(1.029707176511643E-13)=44.396612887629495; dx=-6.66998638459297E23 evalInputDelta=11.4171989378708
Armijo: th(6.057101038303783E-15)=55.81393281058014; dx=-1.282593648560979E24 evalInputDelta=-1.2098507984603657E-4
MIN ALPHA (3.3650561323909904E-16): th(1.6475314824186289E-12)=25.290017461762748
Fitness changed from 55.813811825500295 to 25.290017461762748
Iteration 1 complete. Error: 25.290017461762748 Total: 0.2495; Orientation: 0.0046; Line Search: 0.1878
th(0)=25.290017461762748;dx=-1.3152605492324672E23
Armijo: th(2.154434690031884E-15)=25.290049796859808; dx=-1.3149310398340971E23 evalInputDelta=-3.23350970603542E-5
Armijo: th(1.077217345015942E-15)=25.290033628946738; dx=-1.315095794533282E23 evalInputDelta=-1.6167183989779232E-5
MIN ALPHA (3.5907244833864734E-16): th(0.0)=25.290017461762748
Fitness changed from 25.290017461762748 to 25.290017461762748
Static Iteration Total: 0.0302; Orientation: 0.0012; Line Search: 0.0243
Iteration 2 failed. Error: 25.290017461762748
Previous Error: 0.0 -> 25.290017461762748
Optimization terminated 2
Final threshold in iteration 2: 25.290017461762748 (> 0.0) after 0.280s (< 30.000s)

Returns

    25.290017461762748

This training run resulted in the following configuration:

TrainingTester.java:785 executed in 0.00 seconds (0.000 gc):

    RefList<double[]> state = network.state();
    assert state != null;
    String description = state.stream().map(RefArrays::toString).reduce((a, b) -> a + "\n" + b)
        .orElse("");
    state.freeRef();
    return description;

Returns

    

And regressed input:

TrainingTester.java:797 executed in 0.00 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(data)).flatMap(x -> {
      return RefArrays.stream(x);
    }).limit(1).map(x -> {
      String temp_18_0015 = x.prettyPrint();
      x.freeRef();
      return temp_18_0015;
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ -0.608, 1.4996864210276633, -1.72 ], [ 0.496, 0.048, -0.12800000000377243 ] ],
    	[ [ 1.208, -0.8851653389358727, -0.384 ], [ -1.028, 0.7000000000000648, 1.764 ] ]
    ]

To produce the following output:

TrainingTester.java:808 executed in 0.00 seconds (0.000 gc):

    Result[] array = ConstantResult.batchResultArray(pop(RefUtil.addRef(data)));
    @Nullable
    Result eval = layer.eval(array);
    assert eval != null;
    TensorList tensorList = Result.getData(eval);
    String temp_18_0016 = tensorList.stream().limit(1).map(x -> {
      String temp_18_0017 = x.prettyPrint();
      x.freeRef();
      return temp_18_0017;
    }).reduce((a, b) -> a + "\n" + b).orElse("");
    tensorList.freeRef();
    return temp_18_0016;

Returns

    [
    	[ [ 0.4608111445682395, -1.1366319344855333, 1.3036104747654145 ], [ -0.42259200000028113, -0.0408960000000272, 0.10905600000328665 ] ],
    	[ [ -1.9838471407850928, 1.4536694758029791, 0.6306269056800295 ], [ -1.9655360000000244, 1.3384000000001404, 3.372768000000042 ] ]
    ]

Conjugate Gradient Descent

Next, we train using a conjugate gradient descent method, which converges fastest on quadratic objectives.
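
The QuadraticSearch line search drives step selection with a one-dimensional quadratic model of the objective along the search direction. As a generic sketch, assuming the textbook construction that fits a parabola through the value and slope at t = 0 plus one probe value at t = t_1 (not necessarily the exact formula used by QuadraticSearch):

    \phi(t) = f(x + t\,d), \qquad
    q(t) = \phi(0) + \phi'(0)\,t + \frac{\phi(t_1) - \phi(0) - \phi'(0)\,t_1}{t_1^{2}}\,t^{2}, \qquad
    t^{*} = \frac{-\phi'(0)\,t_1^{2}}{2\bigl(\phi(t_1) - \phi(0) - \phi'(0)\,t_1\bigr)}

In the run below, the very first probe at t = 1.0E-10 already raises the objective and flips the sign of the directional derivative, so the search cannot find an improving step and the iteration is reported as failed.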

TrainingTester.java:603 executed in 0.03 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new QuadraticSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2376029387189
Reset training subject: 2376032306557
Constructing line search parameters: GD
F(0.0) = LineSearchPoint{point=PointSample{avg=55.813811825500295}, derivative=-1.2831394512177885E24}
F(1.0E-10) = LineSearchPoint{point=PointSample{avg=63.35534801285752}, derivative=7.730720776447844E24}, evalInputDelta = 7.541536187357224
63.35534801285752 <= 55.813811825500295
Converged to right
Fitness changed from 55.813811825500295 to 55.813811825500295
Static Iteration Total: 0.0245; Orientation: 0.0011; Line Search: 0.0136
Iteration 1 failed. Error: 55.813811825500295
Previous Error: 0.0 -> 55.813811825500295
Optimization terminated 1
Final threshold in iteration 1: 55.813811825500295 (> 0.0) after 0.025s (< 30.000s)

Returns

    55.813811825500295

This training run resulted in the following configuration:

TrainingTester.java:785 executed in 0.00 seconds (0.000 gc):

    RefList<double[]> state = network.state();
    assert state != null;
    String description = state.stream().map(RefArrays::toString).reduce((a, b) -> a + "\n" + b)
        .orElse("");
    state.freeRef();
    return description;

Returns

    

And regressed input:

TrainingTester.java:797 executed in 0.00 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(data)).flatMap(x -> {
      return RefArrays.stream(x);
    }).limit(1).map(x -> {
      String temp_18_0015 = x.prettyPrint();
      x.freeRef();
      return temp_18_0015;
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ -0.608, 1.524, -1.72 ], [ 0.496, 0.048, -0.128 ] ],
    	[ [ 1.208, 0.08, -0.384 ], [ -1.028, 0.7, 1.764 ] ]
    ]

To produce the following output:

TrainingTester.java:808 executed in 0.00 seconds (0.000 gc):

    Result[] array = ConstantResult.batchResultArray(pop(RefUtil.addRef(data)));
    @Nullable
    Result eval = layer.eval(array);
    assert eval != null;
    TensorList tensorList = Result.getData(eval);
    String temp_18_0016 = tensorList.stream().limit(1).map(x -> {
      String temp_18_0017 = x.prettyPrint();
      x.freeRef();
      return temp_18_0017;
    }).reduce((a, b) -> a + "\n" + b).orElse("");
    tensorList.freeRef();
    return temp_18_0016;

Returns

    [
    	[ [ 0.48883200000000004, -1.2252960000000002, 1.38288 ], [ -0.42259199999999997, -0.040896, 0.109056 ] ],
    	[ [ -2.039104, -0.13504, 0.648192 ], [ -1.965536, 1.3383999999999998, 3.3727679999999998 ] ]
    ]

Limited-Memory BFGS

Next, we apply the same optimization using L-BFGS, which is well suited to smooth, approximately quadratic objectives.
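
L-BFGS builds its search direction from the gradient plus a short history of curvature pairs rather than from the gradient alone. The following is a minimal sketch of the standard two-loop recursion, assuming curvature pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i stored oldest-first; it is illustrative only, not the implementation behind the LBFGS orientation used above:

    // Minimal sketch of the standard L-BFGS two-loop recursion. Given the
    // current gradient g and the last m curvature pairs (s_i, y_i), it
    // returns an approximate Newton direction without forming a Hessian.
    final class LbfgsSketch {
      static double[] direction(double[] g, double[][] s, double[][] y) {
        int m = s.length;
        double[] q = g.clone();
        double[] alpha = new double[m];
        for (int i = m - 1; i >= 0; i--) {           // newest to oldest
          double rho = 1.0 / dot(y[i], s[i]);
          alpha[i] = rho * dot(s[i], q);
          axpy(-alpha[i], y[i], q);                  // q -= alpha_i * y_i
        }
        // scale by an inverse-Hessian estimate taken from the newest pair
        double gamma = m > 0 ? dot(s[m - 1], y[m - 1]) / dot(y[m - 1], y[m - 1]) : 1.0;
        for (int j = 0; j < q.length; j++) q[j] *= gamma;
        for (int i = 0; i < m; i++) {                // oldest to newest
          double rho = 1.0 / dot(y[i], s[i]);
          double beta = rho * dot(y[i], q);
          axpy(alpha[i] - beta, s[i], q);            // q += (alpha_i - beta) * s_i
        }
        for (int j = 0; j < q.length; j++) q[j] = -q[j];  // descend, not ascend
        return q;
      }

      static double dot(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
      }

      static void axpy(double a, double[] x, double[] acc) {
        for (int i = 0; i < x.length; i++) acc[i] += a * x[i];
      }
    }

With an empty history the recursion reduces to plain gradient descent, which matches the first iteration logged below: the accumulation history starts at one measurement point, and the line search trace repeats the gradient-descent trace above.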

TrainingTester.java:674 executed in 0.14 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new LBFGS());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setIterationsPerSample(100);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2376064647454
Reset training subject: 2376067493551
Adding measurement 709fe001 to history. Total: 0
LBFGS Accumulation History: 1 points
Constructing line search parameters: GD
Non-optimal measurement 55.813811825500295 < 55.813811825500295. Total: 1
th(0)=55.813811825500295;dx=-1.2831394512177885E24
Non-optimal measurement 63.19121688981909 < 55.813811825500295. Total: 1
Armijo: th(2.154434690031884)=63.19121688981909; dx=1.9413514324780693E35 evalInputDelta=-7.377405064318793
Non-optimal measurement 63.23098639954293 < 55.813811825500295. Total: 1
Armijo: th(1.077217345015942)=63.23098639954293; dx=9.70675716232633E34 evalInputDelta=-7.417174574042633
Non-optimal measurement 63.48235344468897 < 55.813811825500295. Total: 1
Armijo: th(0.3590724483386473)=63.48235344468897; dx=3.23558572069009E34 evalInputDelta=-7.668541619188673
Non-optimal measurement 64.39604945211286 < 55.813811825500295. Total: 1
Armijo: th(0.08976811208466183)=64.39604945211286; dx=8.088964300764993E33 evalInputDelta=-8.582237626612567
Non-optimal measurement 64.85357258755599 < 55.813811825500295. Total: 1
Armijo: th(0.017953622416932366)=64.85357258755599; dx=1.6177928591287525E33 evalInputDelta=-9.039760762055693
Non-optimal measurement 64.9846351423839 < 55.813811825500295. Total: 1
Armijo: th(0.002992270402822061)=64.9846351423839; dx=2.6963214212120226E32 evalInputDelta=-9.170823316883599
Non-optimal measurement 65.00935588198757 < 55.813811825500295. Total: 1
Armijo: th(4.2746720040315154E-4)=65.00935588198757; dx=3.8518876348479405E31 evalInputDelta=-9.195544056487279
Non-optimal measurement 65.01302626062918 < 55.813811825500295. Total: 1
Armijo: th(5.343340005039394E-5)=65.01302626062918; dx=4.814858423290657E30 evalInputDelta=-9.19921443512888
Non-optimal measurement 65.01349356710418 < 55.813811825500295. Total: 1
Armijo: th(5.9370444500437714E-6)=65.01349356710418; dx=5.349831312031964E29 evalInputDelta=-9.199681741603882
Non-optimal measurement 65.01354615646004 < 55.813811825500295. Total: 1
Armijo: th(5.937044450043771E-7)=65.01354615646004; dx=5.349716084335708E28 evalInputDelta=-9.199734330959743
Non-optimal measurement 65.01355146871173 < 55.813811825500295. Total: 1
Armijo: th(5.397313136403428E-8)=65.01355146871173; dx=4.862214342363223E27 evalInputDelta=-9.19973964321143
Non-optimal measurement 65.01355195566994 < 55.813811825500295. Total: 1
Armijo: th(4.4977609470028565E-9)=65.01355195566994; dx=4.0401091310545235E26 evalInputDelta=-9.199740130169644
Non-optimal measurement 63.81953820725518 < 55.813811825500295. Total: 1
Armijo: th(3.4598161130791205E-10)=63.81953820725518; dx=2.989499655326694E25 evalInputDelta=-8.005726381754883
Adding measurement 732f40d2 to history. Total: 1
New Minimum: 55.813811825500295 > 29.286518035474426
Armijo: th(2.4712972236279432E-11)=29.286518035474426; dx=1.185477114968172E24 evalInputDelta=26.52729379002587
Adding measurement 66c2db1e to history. Total: 2
New Minimum: 29.286518035474426 > 25.290017461762748
Armijo: th(1.6475314824186289E-12)=25.290017461762748; dx=-1.8047501065048973E23 evalInputDelta=30.523794363737547
Non-optimal measurement 44.396612887629495 < 25.290017461762748. Total: 3
Armijo: th(1.029707176511643E-13)=44.396612887629495; dx=-6.66998638459297E23 evalInputDelta=11.4171989378708
Non-optimal measurement 55.81393281058014 < 25.290017461762748. Total: 3
Armijo: th(6.057101038303783E-15)=55.81393281058014; dx=-1.282593648560979E24 evalInputDelta=-1.2098507984603657E-4
Non-optimal measurement 25.290017461762748 < 25.290017461762748. Total: 3
MIN ALPHA (3.3650561323909904E-16): th(1.6475314824186289E-12)=25.290017461762748
Fitness changed from 55.813811825500295 to 25.290017461762748
Iteration 1 complete. Error: 25.290017461762748 Total: 0.1192; Orientation: 0.0032; Line Search: 0.1078
Non-optimal measurement 25.290017461762748 < 25.290017461762748. Total: 3
LBFGS Accumulation History: 3 points
Non-optimal measurement 25.290017461762748 < 25.290017461762748. Total: 3
th(0)=25.290017461762748;dx=-1.3152605492324672E23
Non-optimal measurement 25.290049796859808 < 25.290017461762748. Total: 3
Armijo: th(2.154434690031884E-15)=25.290049796859808; dx=-1.3149310398340971E23 evalInputDelta=-3.23350970603542E-5
Non-optimal measurement 25.290033628946738 < 25.290017461762748. Total: 3
Armijo: th(1.077217345015942E-15)=25.290033628946738; dx=-1.3150957945332822E23 evalInputDelta=-1.6167183989779232E-5
Non-optimal measurement 25.290017461762748 < 25.290017461762748. Total: 3
MIN ALPHA (3.5907244833864734E-16): th(0.0)=25.290017461762748
Fitness changed from 25.290017461762748 to 25.290017461762748
Static Iteration Total: 0.0217; Orientation: 0.0017; Line Search: 0.0168
Iteration 2 failed. Error: 25.290017461762748
Previous Error: 0.0 -> 25.290017461762748
Optimization terminated 2
Final threshold in iteration 2: 25.290017461762748 (> 0.0) after 0.141s (< 30.000s)

Returns

    25.290017461762748

This training run resulted in the following configuration:

TrainingTester.java:785 executed in 0.00 seconds (0.000 gc):

    RefList<double[]> state = network.state();
    assert state != null;
    String description = state.stream().map(RefArrays::toString).reduce((a, b) -> a + "\n" + b)
        .orElse("");
    state.freeRef();
    return description;

Returns

    

And regressed input:

TrainingTester.java:797 executed in 0.00 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(data)).flatMap(x -> {
      return RefArrays.stream(x);
    }).limit(1).map(x -> {
      String temp_18_0015 = x.prettyPrint();
      x.freeRef();
      return temp_18_0015;
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ -0.608, 1.4996864210276633, -1.72 ], [ 0.496, 0.048, -0.12800000000377243 ] ],
    	[ [ 1.208, -0.8851653389358727, -0.384 ], [ -1.028, 0.7000000000000648, 1.764 ] ]
    ]

To produce the following output:

TrainingTester.java:808 executed in 0.00 seconds (0.000 gc):

    Result[] array = ConstantResult.batchResultArray(pop(RefUtil.addRef(data)));
    @Nullable
    Result eval = layer.eval(array);
    assert eval != null;
    TensorList tensorList = Result.getData(eval);
    String temp_18_0016 = tensorList.stream().limit(1).map(x -> {
      String temp_18_0017 = x.prettyPrint();
      x.freeRef();
      return temp_18_0017;
    }).reduce((a, b) -> a + "\n" + b).orElse("");
    tensorList.freeRef();
    return temp_18_0016;

Returns

    [
    	[ [ 0.4608111445682395, -1.1366319344855333, 1.3036104747654145 ], [ -0.42259200000028113, -0.0408960000000272, 0.10905600000328665 ] ],
    	[ [ -1.9838471407850928, 1.4536694758029791, 0.6306269056800295 ], [ -1.9655360000000244, 1.3384000000001404, 3.372768000000042 ] ]
    ]

TrainingTester.java:576 executed in 0.84 seconds (0.000 gc):

    return TestUtil.compare(title + " vs Iteration", runs);
Logging
Plotting range=[0.0, 0.40294912920777315], [2.0, 2.402949129207773]; valueStats=DoubleSummaryStatistics{count=2, sum=50.580035, min=25.290017, average=25.290017, max=25.290017}
Only 1 points for GD
Only 1 points for LBFGS

Returns

Result

TrainingTester.java:579 executed in 0.02 seconds (0.000 gc):

    return TestUtil.compareTime(title + " vs Time", runs);
Logging
Plotting range=[-1.0, 0.40294912920777315], [1.0, 2.402949129207773]; valueStats=DoubleSummaryStatistics{count=2, sum=50.580035, min=25.290017, average=25.290017, max=25.290017}
Only 1 points for GD
Only 0 points for LBFGS

Returns

Result

Results

TrainingTester.java:350 executed in 0.00 seconds (0.000 gc):

    return grid(inputLearning, modelLearning, completeLearning);

Returns

Result

TrainingTester.java:353 executed in 0.00 seconds (0.000 gc):

    return new ComponentResult(null == inputLearning ? null : inputLearning.value,
        null == modelLearning ? null : modelLearning.value, null == completeLearning ? null : completeLearning.value);

Returns

    {"input":{ "LBFGS": { "type": "NonConverged", "value": 25.290017461762748 }, "CjGD": { "type": "NonConverged", "value": NaN }, "GD": { "type": "NonConverged", "value": 25.290017461762748 } }, "model":null, "complete":null}

LayerTests.java:605 executed in 0.00 seconds (0.000 gc):

    throwException(exceptions.addRef());

Results

details: {"input":{ "LBFGS": { "type": "NonConverged", "value": 25.290017461762748 }, "CjGD": { "type": "NonConverged", "value": NaN }, "GD": { "type": "NonConverged", "value": 25.290017461762748 } }, "model":null, "complete":null}
result: OK
  {
    "result": "OK",
    "performance": {
      "execution_time": "2.918",
      "gc_time": "0.198"
    },
    "created_on": 1587004395058,
    "file_name": "trainingTest",
    "report": {
      "simpleName": "Basic",
      "canonicalName": "com.simiacryptus.mindseye.layers.java.ImgPixelGateLayerTest.Basic",
      "link": "https://github.com/SimiaCryptus/mindseye-java/tree/c9a1867488dc7e77a975f095285b5882c0486db6/src/test/java/com/simiacryptus/mindseye/layers/java/ImgPixelGateLayerTest.java",
      "javaDoc": "The type Basic."
    },
    "training_analysis": {
      "input": {
        "LBFGS": {
          "type": "NonConverged",
          "value": 25.290017461762748
        },
        "CjGD": {
          "type": "NonConverged",
          "value": "NaN"
        },
        "GD": {
          "type": "NonConverged",
          "value": 25.290017461762748
        }
      }
    },
    "archive": "s3://code.simiacrypt.us/tests/com/simiacryptus/mindseye/layers/java/ImgPixelGateLayer/Basic/trainingTest/202004163315",
    "id": "1530f3d9-c341-4513-a71b-ba1f57f6bd65",
    "report_type": "Components",
    "display_name": "Comparative Training",
    "target": {
      "simpleName": "ImgPixelGateLayer",
      "canonicalName": "com.simiacryptus.mindseye.layers.java.ImgPixelGateLayer",
      "link": "https://github.com/SimiaCryptus/mindseye-java/tree/c9a1867488dc7e77a975f095285b5882c0486db6/src/main/java/com/simiacryptus/mindseye/layers/java/ImgPixelGateLayer.java",
      "javaDoc": "The type Img pixel gate layer."
    }
  }