Class GreedyCoverageRobot

Inheritance Relationships

Base Type

Class Documentation

class GreedyCoverageRobot : public POMDPCoverageRobot

Subclass of POMDPCoverageRobot which instead uses a greedy policy.

The greedy policy maximises immediate expected reward given the current belief. The immediate expected reward for an action is the probability that the successor cell is free at the next time step, provided that cell is unvisited (visited cells yield no reward). Because this is computed over a belief, not a state, the implementation in the bounds classes can't be reused.
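The action-selection rule above can be sketched as follows. This is a minimal illustrative stand-in, not the actual POMDPCoverageRobot API: the `Action` and `SuccessorBelief` types and the `greedyAction` helper are assumptions introduced here to show the reward computation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in types for illustration only.
struct Action {
  int dx, dy;
};

// Belief about one candidate successor cell: the estimated probability
// that the cell is free at the next time step, and whether the robot
// has already covered it (visited cells yield no reward).
struct SuccessorBelief {
  Action action;
  double probFree;  // P(successor cell free at t+1) under current belief
  bool visited;
};

// Pick the action maximising immediate expected reward:
// probFree if the successor cell is unvisited, 0 otherwise.
Action greedyAction(const std::vector<SuccessorBelief> &candidates) {
  assert(!candidates.empty());
  std::size_t best = 0;
  double bestReward = -1.0;
  for (std::size_t i = 0; i < candidates.size(); ++i) {
    double reward = candidates[i].visited ? 0.0 : candidates[i].probFree;
    if (reward > bestReward) {
      bestReward = reward;
      best = i;
    }
  }
  return candidates[best].action;
}
```

Note that a visited cell with a high free probability is still worth zero, so the greedy choice can prefer a riskier move into an unvisited cell.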

Members: As in superclass

Public Functions

inline GreedyCoverageRobot(const GridCell &currentLoc, int timeBound, int xDim, int yDim, const std::vector<GridCell> &fov, std::shared_ptr<IMacExecutor> exec, std::shared_ptr<IMac> groundTruthIMac = nullptr, const ParameterEstimate &estimationType = ParameterEstimate::posteriorSample)

Constructor calls super constructor.

Parameters:
  • currentLoc – The robot’s current location

  • timeBound – The planning time bound

  • xDim – The x dimension of the map

  • yDim – The y dimension of the map

  • fov – The robot’s FOV as a vector of relative grid cells

  • exec – The IMacExecutor representing the environment

  • groundTruthIMac – The ground truth IMac instance (if we don’t want to use BiMac)

  • estimationType – The type of parameter estimation to use for IMac instance for episode