
Introducing Geometric Constraint Expressions Into Robot Constrained Motion Specification and Control

Borghesan, Gianni; Scioni, Enea; Kheddar, Abderrahmane; Bruyninckx, Herman
2016

Abstract

The problem of robotic task definition and execution was pioneered by Mason [1], who defined set-point constraints in which the position, velocity, and/or forces are expressed in one particular task frame for a 6-DOF robot. Later extensions generalized this approach to constraints in i) multiple frames, ii) redundant robots, iii) other sensor spaces such as cameras, and iv) trajectory tracking. Our work extends task definition to i) expressions of constraints, with a focus on expressions between geometric entities (distances and angles), in place of explicit set-point constraints, ii) a systematic composition of constraints, iii) runtime monitoring of all constraints (which allows for runtime sequencing of constraint sets via, for example, a Finite State Machine), and iv) formal task descriptions that can be used by symbolic reasoners to plan and analyse tasks. This means that tasks are seen as ordered groups of constraints to be achieved by the robot's motion controller, possibly with a different set of geometric expressions to measure outputs that are not controlled but are relevant for assessing the task's evolution. These monitored expressions may result in events that trigger switching to another ordered group of constraints to execute and monitor. For these task specifications, formal language definitions are introduced in the JSON-schema modeling language.
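The task model sketched in the abstract — ordered groups of constraints on geometric expressions, with monitored expressions whose events drive a Finite State Machine between groups — can be illustrated with a minimal sketch. All names here (`TASK`, `distance`, `run_fsm`, the thresholds, and the dictionary layout standing in for the JSON-schema task description) are illustrative assumptions, not the authors' actual schema or API.

```python
# Illustrative sketch only: the task structure and names below are
# assumptions, not the paper's actual JSON-schema or implementation.
import math

def distance(p, q):
    """Euclidean distance between two 3-D points: one example of a
    geometric expression between entities, as opposed to a set-point."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# An ordered group of constraint sets (hypothetical): each state names a
# controlled expression with its target, and a monitored expression whose
# event (value dropping below a threshold) advances the FSM.
TASK = [
    {"name": "approach",
     "constraint": {"expr": "dist(tool, object)", "target": 0.05},
     "monitor": {"expr": "dist(tool, object)", "below": 0.06}},
    {"name": "insert",
     "constraint": {"expr": "dist(tool, hole_axis)", "target": 0.0},
     "monitor": {"expr": "dist(tool, hole_bottom)", "below": 0.01}},
]

def run_fsm(task, measured_sequence):
    """Step through the ordered constraint groups: each measured value of
    the current state's monitored expression is checked, and crossing the
    threshold is the event that switches to the next group."""
    state = 0
    trace = [task[state]["name"]]          # constraint groups activated so far
    for measured in measured_sequence:
        if state < len(task) and measured < task[state]["monitor"]["below"]:
            state += 1                     # monitored event fired: switch
            if state < len(task):
                trace.append(task[state]["name"])
    return trace

# Example run: the tool closes in on the object, then reaches the hole bottom.
print(run_fsm(TASK, [0.20, 0.055, 0.008]))  # ['approach', 'insert']
```

In the paper's setting the controlled expressions would be resolved by the motion controller rather than read from a list, and the task description itself would be validated against the JSON-schema definitions the abstract mentions; this sketch only shows the sequencing idea.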

Documents in SFERA are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2334383
Citations
  • Scopus: 10
  • Web of Science (ISI): 9