NeuroEvolution using TensorFlow JS - Part 3

Introduction


This tutorial is part 3. If you have not completed NeuroEvolution using TensorFlowJS - Part 1 and NeuroEvolution using TensorFlowJS - Part 2, I highly recommend you do those first, as they:
  • Explain how to set up the codebase
  • Teach you how to code a basic NeuroEvolution implementation
  • Improve the performance of the NeuroEvolution
  • Implement the ability to save/load models
In this last tutorial we will:
  • Learn how to use simple shape/object/color detection to gather the inputs required to feed into TensorFlow JS (instead of interacting directly with the GameAPI object)
This tutorial is also based on the source code found on my GitHub account here https://github.com/dionbeetson/neuroevolution-experiment.

Let's get started!

Create base class

We need to create a class that will be used instead of GameAPI. It will gather all of the required inputs from the game every 10ms and pass them to TensorFlow JS to make the same jump/don't-jump prediction as before.

Disclaimer:
  • There is already logic in js/ai/Ai.js to initialise this class if you select the checkbox 'Use Object Recognition' in the UI
  • There are a few small items we are not decoupling from the original GameAPI (eg: getScore()), as they were outside the scope of my experiment. They could be decoupled too if we really wanted to.
How it works

  1. Every 10ms the game will execute a method to:
    1. Extract an image from the game canvas
    2. Convert the image to greyscale
    3. Using a 10x10 pixel grid, create a layout of the level that identifies:
      1. What is whitespace (color=white)
      2. What are obstacles to jump over (color=grey)
      3. Where the player is (color=black)
    4. Look ahead 4 blocks (40 pixels) and determine if there is a block or dip to jump over
    5. Determine the x/y coordinates to feed into TensorFlowJS
    6. Ai.js will then use the same logic as the previous tutorials to decide whether to jump or not (see the sketch below)
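
To see how these steps fit together before we build them, below is a minimal sketch of the per-tick loop. The GameImageRecognition method names match the ones we implement throughout this tutorial, while the Ai.js side is reduced to placeholder comments, since the real prediction wiring already exists from parts 1 and 2.

// Minimal sketch only - the real loop is driven by Ai.js/the game itself
const game = new GameImageRecognition();
game.start();

setInterval(() => {
  // 1-3. Snapshot the canvas, greyscale it and build the 10x10 color map
  //      (getPlayerX() triggers extractVisualTrackingData(), implemented below)
  const playerX = game.getPlayerX();
  const playerY = game.getPlayerY();

  // 4-5. The section ahead of the player becomes x/y/width/height inputs
  const sectionAhead = game.getSectionFromPlayer(0);

  // 6. Ai.js feeds these into TensorFlowJS exactly as in the previous tutorials
  // if (shouldJump && game.canPlayerJump()) { game.jump(); }
}, 10);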

Create a file called js/ai/GameImageRecognition.js and then paste in the below code.

js/ai/GameImageRecognition.js

class GameImageRecognition {
}
Let's now create the start() method, which is called by default in Ai.js. It starts the game, sets up the canvas tracker (essentially a new hidden canvas DOM element that we paint an image of the game canvas into every 10ms) and uses that tracker for detecting shapes/objects/player position, etc.

js/ai/GameImageRecognition.js

start() {
  const self = this;

  this.#enableVision = document.querySelector("#ml-enable-vision").checked;

  // @todo - remove dependency on gameAPI object - although outside of scope of this example
  this.#gameApi.start();
  this.setupCanvasTracker();

  // Simulate what happens in the game
  setTimeout(() => {
    self.#isSetup = true;
  }, 100);
}
We will also add in a few other helper functions to get things moving.

js/ai/GameImageRecognition.js

setupCanvasTracker(){
  this.#visualTrackingCanvas = document.createElement("canvas");
  this.#visualTrackingCanvas.setAttribute("width", this.#gameApi.getWidth());
  this.#visualTrackingCanvas.setAttribute("height", this.#gameApi.getHeight());
  this.#visualTrackingCanvas.setAttribute("class", "snapshot-canvas");

  this.#gameApi.getContainer().appendChild(this.#visualTrackingCanvas);
  this.#gameApiCanvas = this.#gameApi.getCanvas();
}

setHighlightSectionAhead(index) {
  // Not required for this demo
  return;
}

isOver() {
  return this.#gameApi.isOver();
}

isSetup() {
  return this.#isSetup;
}
We also need to add some required class variables.

js/ai/GameImageRecognition.js

#gameApi = new GameApi();
#gameApiCanvas;
#isSetup = false;
#visualTrackingCanvas;
#enableVision = false;
Now wire up the UI event handler for the 'Use Object Recognition' checkbox.

js/ai/ui.js

document.querySelector("#ml-use-object-recognition").addEventListener("change", function() {
  if ( this.checked ) {
    neuroEvolution.useImageRecognition = true;
  } else {
    neuroEvolution.useImageRecognition = false;
  }
});
And add in the required setter for useImageRecognition.

js/ai/NeuroEvolution.js

set useImageRecognition( useImageRecognition ) {
  this.#useImageRecognition = useImageRecognition;
}
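
If NeuroEvolution.js does not already declare a private backing field for this setter from the earlier tutorials, add one alongside the other class fields (the field below is assumed from the setter above).

js/ai/NeuroEvolution.js

// Assumed backing field for the useImageRecognition setter
#useImageRecognition = false;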
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. You should see the games begin, but also get a lot of 'game.gameApi.getPlayerY is not a function' errors. This is because we need to implement a range of functions to gather input.
Before we do that though, we will add in the logic to extract information from the game canvas every 10ms.

js/ai/GameImageRecognition.js


// Method to extract data from canvas/image and convert it into a readable format for this class to use
extractVisualTrackingData(){
  let data = this.#gameApiCanvas.getContext('2d').getImageData(0, 0, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height);
  let dataGrey = this.convertImageToGreyScale(data);

  this.#visualTrackingMap = this.generateVisualTrackingMap(dataGrey, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height, this.#visualTrackingMapSize, this.#colors);

  this.updatePlayerPositionFromVisualTrackingMap(this.#visualTrackingMap, this.#colors);

  this.#sectionAhead = this.getSectionAhead(this.#playerX, this.#playerY, 4, this.#visualTrackingMapSize, this.#playerGroundY);
}

// Method to create an object keyed by x/y position with the color as the value, eg: '10x40' = 'grey'
generateVisualTrackingMap(data, width, height, visualTrackingMapSize, colors) {
  let visualTrackingMap = {};
  for( let y = 0; y < height; y+=visualTrackingMapSize ) {
    for( let x = 0; x < width; x+=visualTrackingMapSize ) {
      // Sample the centre pixel of each 10x10 block
      let col = this.getRGBAFromImageByXY(data, x+5, y+5);
      let key = x+'x'+y;
      visualTrackingMap[key] = colors.background;

      // Red channel of 0 = black, ie: the player
      if ( 0 == col[0] ) {
        visualTrackingMap[key] = colors.player;
      }

      // Red channel between 210 and 240 = grey, ie: a block/obstacle
      if ( col[0] > 210 && col[0] < 240 ) {
        visualTrackingMap[key] = colors.block;
      }
    }
  }

  return visualTrackingMap;
}
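As an illustration (the exact keys and values depend on your canvas and level, so treat the entries below as made-up examples), a generated visualTrackingMap looks something like this:

// Illustrative excerpt only - keys are 'XxY' pixel coordinates in 10px steps
const exampleVisualTrackingMap = {
  '0x0': 'white',     // empty background
  '10x0': 'white',
  '150x240': 'grey',  // part of an obstacle
  '60x230': 'black'   // the player block
};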
The above functions have extra dependencies. Let's add in functionality to convert an image to greyscale, as well as to get the RGBA values of a specific pixel in that image.

js/ai/GameImageRecognition.js

convertImageToGreyScale(image) {
  let greyImage = new ImageData(image.width, image.height);
  // RGBA image data has 4 values per pixel
  const pixelCount = image.data.length / 4;
  for( let i=0; i < pixelCount; i++ ){
    let i4 = i*4;
    let r = image.data[i4 + 0];
    let g = image.data[i4 + 1];
    let b = image.data[i4 + 2];

    // Store the luminance in the red channel only, as that is the only
    // channel we sample when building the visual tracking map
    greyImage.data[i4 + 0] = Math.round(0.21*r + 0.72*g + 0.07*b);
    greyImage.data[i4 + 1] = g;
    greyImage.data[i4 + 2] = b;
    greyImage.data[i4 + 3] = 255;
  }

  return greyImage;
}

getRGBAFromImageByXY(imageData, x, y) {
  // Image data is a flat RGBA array in row-major order,
  // so pixel (x, y) starts at index (y * width + x) * 4
  let rowStart = y * imageData.width * 4;
  let pixelIndex = rowStart + x * 4;

  return [
    imageData.data[pixelIndex],
    imageData.data[pixelIndex+1],
    imageData.data[pixelIndex+2],
    imageData.data[pixelIndex+3],
  ]
}
Add these class variables as well.

js/ai/GameImageRecognition.js

#visualTrackingMap = {};
#visualTrackingMapSize = 10; // Each tracked block is 10x10 pixels
#sectionAhead = [];
#playerX = 0;
#playerY = 0;
#playerGroundY = 0;
#colors = {
  block: 'grey',
  visionOutline: 'red',
  player: 'black',
  background: 'white'
};
Now we want to add in 3 methods that will be called from Ai.js to detect some of the inputs from the previous tutorials.

js/ai/GameImageRecognition.js

getHeight() {
  return this.#visualTrackingCanvas.height;
}

getWidth() {
  return this.#visualTrackingCanvas.width;
}

getPlayerY() {
  return this.#playerY;
}
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. Again, you should see the games begin, but now get a lot of 'game.gameApi.getPlayerX is not a function' errors. OK, we are making progress, so let's implement this method.
getPlayerX() is actually the method we hook into to do all of the processing of the game's canvas. Realistically we could have pulled this out into its own setInterval() (see the sketch after the code below), but for the purpose of this demo let's couple it in with getPlayerX(), which is called within every think() invocation.

js/ai/GameImageRecognition.js

getPlayerX() {
  this.extractVisualTrackingData();
 
  return this.#playerX;
}
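
If you did want to decouple the canvas processing from getPlayerX(), a minimal sketch would be to drive it from its own timer in start() and clear that timer when the game is torn down (in the remove() method we add later). The #trackingInterval field below is an assumption for illustration, not part of the demo code.

// Sketch only - assumes a new #trackingInterval class field
start() {
  // ...existing start() logic...
  this.#trackingInterval = setInterval(() => this.extractVisualTrackingData(), 10);
}

remove() {
  clearInterval(this.#trackingInterval);
  // ...existing remove() logic...
}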
Now add in a method to determine the player's x/y position on the canvas (we do this by finding the 10x10 pixel block whose color is #000000 (black)). Simple, yet effective.

js/ai/GameImageRecognition.js

updatePlayerPositionFromVisualTrackingMap(visualTrackingMap, colors) {
  for (const xy in visualTrackingMap) {
    let value = visualTrackingMap[xy];

    if ( colors.player == value) {
      let position = xy.split('x');
      this.#playerX = parseInt(position[0]);
      this.#playerY = parseInt(position[1]);

      // If we don't have a ground position yet, set it
      if( 0 == this.#playerGroundY ) {
        this.#playerGroundY = this.#playerY;
      }
    }
  }
}
Next up is a lot of logic that looks through visualTrackingMap (which stores the color of each 10x10 pixel section) and determines what lies ahead of the player.

js/ai/GameImageRecognition.js

getSectionAhead(playerX, playerY, aheadIndex, pixelMapSize, playerGroundY){
  let x;
  let y;
  let section;
  let aheadWidth = aheadIndex*10;

  x = Math.ceil(playerX/pixelMapSize) * pixelMapSize + aheadWidth;
  y = Math.ceil(playerY/pixelMapSize) * pixelMapSize;

  section = this.getCollisionSectionAhead(x, y);

  if( false == section ) {
    section = [x, playerGroundY+pixelMapSize, pixelMapSize, pixelMapSize];
  }

  return {
    x: section[0],
    y: section[1],
    width: section[2],
    height: section[3],
  };
}

// Logic to get the xy and width/height of the section ahead that we need to use to determine if we jump over or not
getCollisionSectionAhead(x, y) {

  // Look for drop/dip section ahead we need to jump over
  y = this.#playerGroundY;

  if ( this.isSectionSolid(x, y) ) {
    // Look for taller section ahead we need to jump over
    let xyStart = this.findTopLeftBoundsOfSolidSection(x, y-this.#visualTrackingMapSize);
    let xyEnd = this.findTopRightBoundsOfSolidSection(xyStart[0], xyStart[1], 1);
  
    return [xyStart[0], xyStart[1], xyEnd[0] - x, y - xyEnd[1] + this.#visualTrackingMapSize];
  } else {

    if (  false === this.isSectionSolid(x, y+this.#visualTrackingMapSize) ) {
      let xyStart = this.findBottomLeftBoundsOfSolidSection(x, y);
      let xyEnd = this.findBottomRightBoundsOfSolidSection(xyStart[0], xyStart[1], 1);

      return [xyStart[0], xyEnd[1]+this.#visualTrackingMapSize, xyEnd[0] - x, this.#visualTrackingMapSize];
    }
  }

  return false;
}

isSectionSolid(x, y){
  let section = this.#visualTrackingMap[x + 'x' + y];
  if ( this.#colors.block == section ) {
    return true;
  }

  return false;
}

findTopLeftBoundsOfSolidSection(x, y) {
  if ( this.isSectionSolid(x, y) ) {
    return this.findTopLeftBoundsOfSolidSection(x, y-this.#visualTrackingMapSize)
  }

  return [x,y+this.#visualTrackingMapSize];
}

findTopRightBoundsOfSolidSection(x, y, counter) {
  if ( counter < 5 && this.isSectionSolid(x, y) ) {
    counter++
    return this.findTopRightBoundsOfSolidSection(x+this.#visualTrackingMapSize, y, counter)
  }

  return [x,y];
}

findBottomLeftBoundsOfSolidSection(x, y) {
  if ( false === this.isSectionSolid(x, y) && y < this.#visualTrackingCanvas.height) {
    return this.findBottomLeftBoundsOfSolidSection(x, y+this.#visualTrackingMapSize)
  }

  return [x,y-this.#visualTrackingMapSize];
}

findBottomRightBoundsOfSolidSection(x, y, counter) {
  if ( counter < 5 && false === this.isSectionSolid(x, y) ) {
    counter++
    return this.findBottomRightBoundsOfSolidSection(x+this.#visualTrackingMapSize, y, counter)
  }

  return [x,y];
}

getSectionFromPlayer(index) {
  return {
    x: this.#sectionAhead.x,
    y: this.#sectionAhead.y,
    width: this.#visualTrackingMapSize,
    height: this.#playerY-this.#sectionAhead.y
  };
}
I will be the first to admit that the above logic is not clean or performant and can really be improved. But the purpose of this demo was to prove what is possible - feel free to submit a PR if you want to improve it :-)
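To make the output of getSectionAhead() more concrete, here is an illustrative example (not the actual Ai.js code - Ai.js already handles the real input wiring from parts 1 and 2) of how the returned rectangle could be turned into normalized inputs for TensorFlowJS, assuming game is an instance of GameImageRecognition:

// Illustrative only - the real inputs are assembled in Ai.js
const section = game.getSectionFromPlayer(0);

// Horizontal distance to the section ahead, normalized by canvas width
const distanceAhead = (section.x - game.getPlayerX()) / game.getWidth();

// Height of the section relative to the player, normalized by canvas height
const relativeHeight = section.height / game.getHeight();

// These (plus player state such as isPlayerJumping()) feed the model's input tensor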
Getting closer... Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. You should now see a lot of 'game.gameApi.isPlayerJumping is not a function' errors. Let's implement that and a few other player-related methods that are needed.

js/ai/GameImageRecognition.js

isPlayerJumping() {
  if( this.#playerY < this.#playerGroundY ) {
    return true;
  }

  return false;
}

getPlayerVelocity() {
  // We don't derive velocity from the canvas snapshots, so always return 0
  return 0;
}

canPlayerJump() {
  if( this.isPlayerJumping() ) {
    return false;
  }

  return true;
}
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. You should now see a lot of 'game.gameApi.setDebugPoints is not a function' errors. Let's implement that too.

js/ai/GameImageRecognition.js

setDebugPoints(debugPoints) {
  this.#gameApi.setDebugPoints(debugPoints);
}
We are actually missing a key method: jump(). For the sake of this demo, we are just going to revert to calling the GameAPI. We could simulate the jump with a bit of trickery by focusing the canvas and triggering the spacebar key, but that is a little too much for this demo (a rough sketch follows the code below).

js/ai/GameImageRecognition.js

jump(){
  // The only way to simulate this is by pressing the spacebar key, but because we have multiple games at once it isn't easily possible.
  this.#gameApi.jump();
}
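
If you wanted to experiment with simulating the key press instead, a minimal sketch is below. It assumes the game listens for a spacebar keydown on the document, and because we run multiple games at once the event would reach all of them - which is exactly why this demo sticks with GameAPI's jump().

// Sketch only - assumes the game listens for a spacebar keydown on the document,
// and note every running game would receive this event
jump(){
  document.dispatchEvent(new KeyboardEvent('keydown', {
    key: ' ',
    code: 'Space',
    bubbles: true
  }));
}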
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. You should now see the game mostly work, although a few errors will still pop up. Add in the below.

js/ai/GameImageRecognition.js

getProgress() {
  return this.#gameApi.getProgress();
}

getScore() {
  return this.#gameApi.getScore();
}

isLevelPassed() {
  return this.#gameApi.isLevelPassed();
}

remove() {
  if( null !== this.#visualTrackingCanvas.parentNode ) {
    this.#visualTrackingCanvas.parentNode.remove();
  }
}

show() {
  if( null !== this.#visualTrackingCanvas.parentNode ) {
    this.#visualTrackingCanvas.parentNode.classList.remove('game-container-hide');
  }
}
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition' and click the 'Start evolution' button. Everything should work now; if you let it run, it will eventually solve all of the levels.
However... wouldn't it be nice to see what the ML is actually seeing in each game? Let's add in some debugging info.

js/ai/GameImageRecognition.js

drawRectOnCanvas(rect, color) {
  let context = this.#visualTrackingCanvas.getContext('2d');
  context.beginPath();
  context.strokeStyle = color;
  context.lineWidth = 1;
  context.rect(rect.x, rect.y, rect.width, rect.height);
  context.stroke();
}

// Function responsible for drawing what the computer sees; we then use this to get the inputs for TensorFlow
drawMachineVision() {
  if( this.#enableVision ) {
    // Clear everything first
    this.#visualTrackingCanvas.getContext('2d').clearRect(0, 0, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height);

    // Draw player
    this.drawRectOnCanvas({
      x: this.#playerX,
      y: this.#playerY,
      width: this.#visualTrackingMapSize,
      height: this.#visualTrackingMapSize,
    }, this.#colors.visionOutline);

    // Draw map sections
    for (const xy in this.#visualTrackingMap) {
      let value = this.#visualTrackingMap[xy];

      if ( this.#colors.block == value) {
        let position = xy.split('x');
        this.drawRectOnCanvas({
          x: parseInt(position[0]),
          y: parseInt(position[1]),
          width: this.#visualTrackingMapSize,
          height: this.#visualTrackingMapSize
        }, this.#colors.visionOutline);
      }
    }


    // Draw the section ahead that we are evaluating for a jump
    this.drawRectOnCanvas({
      x: this.#sectionAhead.x,
      y: this.#sectionAhead.y,
      width: this.#sectionAhead.width,
      height: this.#sectionAhead.height,
    }, 'blue');
  }
}
Then change the method getPlayerX() to look like this.

js/ai/GameImageRecognition.js

getPlayerX() {
  this.extractVisualTrackingData();
  this.drawMachineVision();
  return this.#playerX;
}
Reload your browser (http://127.0.0.1:8080/), check 'Use Object Recognition', check 'Enable ML vision' and click the 'Start evolution' button. You should now see lots of red boxes that highlight what the ML is actually using as inputs. Your browser will most likely struggle, but it will work.

I get consistent results along the lines of:
  • Level 1: Takes 10-25 generations
  • Level 2: Takes 15-40 generations
  • Level 3: Takes 40-400 generations (as it has to learn to jump blocks and gaps).

Congratulations!

I hope this tutorial was useful for learning how to use TensorFlowJS to build a NeuroEvolution implementation. If you have any questions, leave them in the comments below or tweet me on Twitter at @dionbeetson.

Source code

All source code for this tutorial can be found here https://github.com/dionbeetson/neuroevolution-experiment.
