Andrei Costinescu

Projects

As I am approaching the end of my PhD, my projects all focus on improving and expanding the framework for knowledge representation that I have developed: the Concept Hierarchy.
An introductory video about the framework can be found here.

Project areas include:
  • User-friendly visualizations of the Concept Hierarchy data
  • Creating easy interfaces to specify and modify data in the Concept Hierarchy
  • Writing and evaluating Concept Hierarchies for different applications in households, robotics, hospitals, assemblies, industry, and others
  • Implementing Robotics and Knowledge Representation features for the Concept Hierarchy
  • Computer Science topics related to the Concept Hierarchy
If you are interested in working on a project, send me an email including your CV, and I will get back to you.
The project descriptions use terms defined in the Concept Hierarchy paper and assume that you are familiar with the paper.
Each project can be simplified or extended with additional features, depending on your project type (Interdisciplinary Project, Guided Research, Bachelor's or Master's Thesis, etc.).

This page may be outdated or incomplete, meaning that projects may already be assigned, finished, or not listed here at all. If you find the listed topics interesting or have ideas of your own, contact me and we'll brainstorm a project that suits all of us.

Generating Concept Hierarchies using offline LLM Agents

Implementing and Evaluating Knowledge Retrieval Methods for the Concept Hierarchy

Web-Interface for Humans to Plan Task Solutions for Robots

Using LLMs as Task Solvers and Humans as Final Checks Before Execution

Extending the Concept Hierarchy to a Programming Language

Developing a Syntax Highlighter, Code Linter, Code Completion and Error Checker for the Concept Hierarchy

Evaluating Code Generation Paradigms on Runtime, Library Size, and Usability

Task Planning for Robots based on Environment Goal Configurations

From Task Demonstration(s) to an Adaptable and Generalized Task Goal

Probabilistic Task Recognition based on Recognized Skills and Goal-Specific Difference Metrics


Generating Concept Hierarchies using offline LLM Agents

As children, we build our models of the world with help from the outside: our parents, books, or even videos. Let's also build models of the world, i.e. Concept Hierarchies, using outside sources. So far, the developed Concept Hierarchies have been created by people doing the "manual labor" of deciding what is relevant for a specific application domain. But this relevance is already encoded, for example, in textbooks (cookbooks, usage or repair manuals, medicine or anatomy books, etc.) or pictograms (IKEA assembly instructions).
In this project, a method to create Concept Hierarchies from textual input is to be developed.

Needed skills:
  • Experience with LLMs is welcome but willingness to learn is important
  • Coding passion
Available data/modules:
  • Syntax definition from the Concept Hierarchy
Your tasks:
  • Set up an offline LLM agent that can parse a text and extract the concepts it describes, together with their properties (see the sketch below).
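A minimal sketch of how such an agent could work, assuming a local model is reachable through a run_local_llm(prompt) helper (hypothetical here, e.g. backed by llama.cpp or Ollama) and assuming a simple JSON output schema that is not the actual Concept Hierarchy syntax: the model is prompted to emit concepts with their properties, and the answer is validated before being turned into definitions.

```python
import json

def run_local_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to a locally running LLM
    (e.g. llama.cpp or Ollama) and return the raw completion.
    Stubbed with a canned answer so the sketch runs as-is."""
    return json.dumps({"concepts": [
        {"name": "Cup", "parent": "Container",
         "properties": [{"name": "capacity_ml", "type": "Number"}]}]})

EXTRACTION_PROMPT = """You are building a knowledge base from a textbook excerpt.
List every concept mentioned in the text together with its parent concept and properties.
Answer ONLY with JSON of the form:
{{"concepts": [{{"name": "...", "parent": "...", "properties": [{{"name": "...", "type": "..."}}]}}]}}

Text:
{text}
"""

def extract_concepts(text: str) -> list[dict]:
    """Ask the LLM for concepts/properties and validate the returned JSON."""
    raw = run_local_llm(EXTRACTION_PROMPT.format(text=text))
    data = json.loads(raw)                    # fails loudly if the model ignored the format
    concepts = data.get("concepts", [])
    for concept in concepts:                  # minimal sanity checks before further processing
        assert "name" in concept and "properties" in concept
    return concepts

if __name__ == "__main__":
    sample = "A cup is a container. A cup has a capacity measured in millilitres."
    print(extract_concepts(sample))
```

A second pass could then map the extracted names onto the actual Concept Hierarchy syntax definition.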

Implementing and Evaluating Knowledge Retrieval Methods for the Concept Hierarchy

Ontologies similar to the Concept Hierarchy, such as KnowRob, can be imported into reasoning tools, such as Prolog, that retrieve knowledge (i.e. data) based on queries. Let's implement a knowledge retrieval method for the Concept Hierarchy as well.
In this project, a method to retrieve knowledge based on queries is to be developed. A query format, inspired either by SQL or by Prolog, must be defined and then executed on the Concept Hierarchy structure.

Needed skills:
  • Good C++ skills and experience
  • Experience with databases, query optimization, and Prolog or other ontologies (or other knowledge retrieval methods) is preferred
  • Coding passion
Available data/modules:
  • Syntax definition from the Concept Hierarchy
Your tasks:
  • Develop a query structure; the variations could also serve as ranges for queries (see the sketch below).
  • Optimize execution of the query using caching or query optimization.
  • Prevent infinite recursion when traversing related properties (e.g. i = i.mother.sons[0];).
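As a feel for what the query structure could look like, here is a rough Python prototype (the Instance/Query classes and the dictionary-based data model are illustration-only assumptions, not the actual C++ Concept Hierarchy API): a query selects instances of a concept whose properties satisfy predicates, and a visited set guards against cycles such as i = i.mother.sons[0] when following related properties.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical, simplified instance model; real queries would run on the
# C++ Concept Hierarchy structures instead of plain dictionaries.
@dataclass
class Instance:
    concept: str
    properties: dict[str, Any] = field(default_factory=dict)  # may reference other Instances

@dataclass
class Query:
    concept: str                                          # which concept to select from
    conditions: list[tuple[str, Callable[[Any], bool]]]   # (property path, predicate)

def resolve(instance: Instance, path: str, visited: set[int] | None = None) -> Any:
    """Follow a dotted property path (e.g. 'mother.sons'), refusing to revisit an
    instance on the same path, which prevents infinite recursion on cyclic relations."""
    visited = visited or set()
    current: Any = instance
    for part in path.split("."):
        if isinstance(current, Instance):
            if id(current) in visited:
                raise RecursionError(f"cycle while resolving '{path}'")
            visited.add(id(current))
            current = current.properties.get(part)
        else:
            return None
    return current

def execute(query: Query, instances: list[Instance]) -> list[Instance]:
    """Return all instances of the queried concept that satisfy every condition."""
    results = []
    for inst in instances:
        if inst.concept != query.concept:
            continue
        if all(pred(resolve(inst, path)) for path, pred in query.conditions):
            results.append(inst)
    return results

if __name__ == "__main__":
    cup = Instance("Cup", {"capacity_ml": 250})
    pot = Instance("Cup", {"capacity_ml": 1000})
    big = Query("Cup", [("capacity_ml", lambda v: v is not None and v > 500)])
    print([i.properties for i in execute(big, [cup, pot])])   # -> [{'capacity_ml': 1000}]
```

Caching resolved paths and reordering conditions by selectivity would be natural next steps for the optimization task.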

Web-Interface for Humans to Plan Task Solutions for Robots

Humans are amazing at finding solutions for the most complicated problems; robots struggle to find (optimal) solutions because the search space is incredibly large. Thus, methods to find task solutions are needed. One such idea is to present the current environment and the task goal to a human and let them create a solution. The solution might not be optimal, but it provides an initial starting point from which further optimization can happen.
In this project, an interface for a human to define a skill solution for a task is to be developed. In a web-interface, the initial state, the goal state, and a canvas for creating and parametrizing Skills should be visualized. Furthermore, once a skill-sequence solution is created, the interface should write the solution to a file, pass that file to a solution checker, and report to the user whether the solution is correct or not.

Needed skills:
  • Experience with web-application development in html/javascript
  • An eye for nice, user-friendly frontends, both for the solution definition and for the problem statement
  • Coding passion
Available data/modules:
  • An existing hierarchical visualization (using d3.js) of Concept Hierarchy data that can be used as starting point for development
Your tasks:
  • Develop a visualization/canvas-editing tool (similar to Scratch, rete.js, etc.) in which Skill-specific blocks can be created, chained, and parametrized based on the properties that a Skill defines in the Concept Hierarchy
    • See link for some tool ideas
  • Visualize the initial environment state.
  • Visualize the goal environment variation.
  • Check the provided solution and, if it is not correct, show why it does not satisfy the goal variation (i.e. show what is still missing); see the backend sketch below.
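To make the interface-to-checker handover concrete, here is a minimal backend sketch (assuming Flask as the web framework; the file format and the check_solution function are placeholders for the existing solution checker): the frontend POSTs the skill sequence, the backend writes it to a file, calls the checker, and returns whether the goal variation is satisfied and what is still missing.

```python
import json
import tempfile

from flask import Flask, jsonify, request

app = Flask(__name__)

def check_solution(solution_file: str) -> tuple[bool, list[str]]:
    """Placeholder for the existing solution checker: returns (is_correct,
    list of unmet parts of the goal variation)."""
    # The real checker would load the file and simulate the skill sequence.
    return False, ["Cup is not yet inside the Cupboard"]

@app.route("/check", methods=["POST"])
def check():
    solution = request.get_json()             # skill sequence built in the canvas editor
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(solution, f)                 # hand the solution to the checker as a file
        path = f.name
    ok, missing = check_solution(path)
    return jsonify({"correct": ok, "missing": missing})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```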

Using LLMs as Task Solvers and Humans as Final Checks Before Execution

Humans are amazing at finding solutions for the most complicated problems; robots struggle to find (optimal) solutions because the search space is incredibly large. Thus, methods to find task solutions are needed. One such idea is to let a large language model (LLM), trained on human experience, do the planning and create a solution for the task. The LLM-generated solution might not be optimal or even correct, but it provides an initial starting point from which further optimization can happen.
In this project, an LLM agent that determines a skill sequence for solving a task is to be developed. The LLM agent should receive a prompt containing the initial EnvironmentData, the goal Environment-Variation, and the planning context. The agent should have access to the data defined in the Concept Hierarchy, to know which skills are defined and what effects they have on the objects. In a web-interface, the initial state, the goal state, and a canvas for displaying and modifying Skills should be visualized. Once a skill-sequence solution is created and verified to be correct, a human gives the final approval before the solution is sent to the robot for execution.

Needed skills:
  • Experience with LLMs is welcome but willingness to learn is important
  • Experience with web-application development in html/javascript
  • An eye for nice, user-friendly frontends, both for the solution definition and for the problem statement
  • Coding passion
Available data/modules:
  • An existing hierarchical visualization (using d3.js) of Concept Hierarchy data that can be used as starting point for development
  • A module that checks whether an environment satisfies the goal environment variation
  • Robot execution of a given skill sequence
Your tasks:
  • Develop the offline LLM agent that can parse and understand the data in a given Concept Hierarchy.
  • Create the prompt structure for the task planner (see the sketch below).
  • Constrain the output of the LLM to be a skill sequence (each with its correct properties).
  • Use the visualization tool developed in the project above for visualization.
  • Implement dispatch of the solution from the web-interface to the robot.
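A sketch of the prompt structure and of a simple output constraint (the skill catalogue and the run_local_llm helper are assumptions for illustration; in the project the skills and their parameters come from the Concept Hierarchy): the prompt carries the initial EnvironmentData, the goal variation, and the allowed skills, and the model's answer is rejected unless every step uses a defined skill with exactly its required parameters.

```python
import json

# Hypothetical skill catalogue; in the real system this is read from the Concept Hierarchy.
SKILLS = {
    "Pick":  ["object"],
    "Place": ["object", "target"],
    "Pour":  ["source", "target"],
}

PROMPT = """You are a task planner for a robot.
Initial environment (EnvironmentData):
{environment}

Goal (Environment-Variation):
{goal}

Allowed skills and their parameters: {skills}
Answer ONLY with a JSON list: [{{"skill": "...", "parameters": {{...}}}}, ...]
"""

def run_local_llm(prompt: str) -> str:
    """Hypothetical helper around an offline LLM backend, stubbed with a canned answer."""
    return json.dumps([{"skill": "Pick", "parameters": {"object": "cup1"}},
                       {"skill": "Place", "parameters": {"object": "cup1", "target": "cupboard"}}])

def plan(environment: dict, goal: dict) -> list[dict]:
    prompt = PROMPT.format(environment=json.dumps(environment),
                           goal=json.dumps(goal),
                           skills=json.dumps(SKILLS))
    steps = json.loads(run_local_llm(prompt))
    for step in steps:  # constrain the output: only defined skills with their exact parameters
        expected = SKILLS.get(step.get("skill"))
        if expected is None or set(step.get("parameters", {})) != set(expected):
            raise ValueError(f"LLM produced an invalid step: {step}")
    return steps

if __name__ == "__main__":
    print(plan({"cup1": {"at": "table"}}, {"cup1": {"at": "cupboard"}}))
```

A rejected plan would be sent back to the model with the validation error, before the (valid) result is shown to the human for the final approval.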

Extending the Concept Hierarchy to a Programming Language

The Concept Hierarchy can not only be used to classify objects into concepts, define action and skill affordances, and specify properties of objects and agents. It can also be used to define behaviors for robots during execution, to specify the requirements and effects of skills, and even to define how to check whether a skill is active in the environment. The Concept Hierarchy does all this by modelling Functions, such as Add(arg1: Number, arg2: Number) -> Number, IsObjectCloseToPoint(obj: Instance<Object>, point: Vector<3>) -> Boolean or AddToSequence<T>(seq: Sequence<T>, newElem: T). The Concept Hierarchy only defines the usage interface to these Functions, so that they can be called from the skills' requirements, active checks, effects, and behavior-tree Functions; the implementation of the Functions is done in C++ and is thus separated from the Concept Hierarchy.
The Concept Hierarchy also defines ControlFlow-Functions, such as FunctionSequence(fs: Sequence<Function>), Condition(condition: Boolean, ifTrue: Sequence<Function>, ifFalse: Sequence<Function>) or Return<T>(what: T) -> T. However, these are not implemented in C++ but in the python parsing script of the Concept Hierarchy definitions. This is what currently prevents the Concept Hierarchy from becoming a complete, usable programming language.
In this project, the extension of the Concept Hierarchy into a programming language is to be developed. The goal is to handle ControlFlow-Functions and constructs (such as break, continue, and return) from within Functions (such as FunctionSequences and ForLoops).

Needed skills:
  • Good-to-Excellent skills and experience with C++
  • A good understanding of the control flow mechanisms of modern programming languages.
  • Coding passion
Available data/modules:
  • The Control-Flow Functions left to implement in C++
Your tasks:
  • Implement C++ mechanisms to break out of and return values from deeply nested Function compositions (a conceptual sketch follows below).
  • Implement a C++ mechanism to create variables on demand using the CreateLocalVariable<T> Function.
  • Extension: Implement a C++ mechanism to use global variables.
  • Verify serialization and deserialization for these Functions (i.e. write complete programs in the Concept Hierarchy and let them run in C++ without the intermediate python generation and C++ compilation step).
  • Optional: Create a canvas in a web-interface as a coding-block programming interface for the Concept Hierarchy, to view the definition of skill effects, requirements, and behavior trees and to edit and create new Functions for skills.
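The actual mechanism has to live in C++, but the core idea can be prototyped compactly. Below is a minimal Python sketch (all names are illustration-only, not the Concept Hierarchy API) in which every Function evaluation returns a control-flow signal, so that Return and Break propagate out of deeply nested FunctionSequence/ForLoop compositions; the same signal-propagation pattern (or C++ exceptions) could carry break/continue/return through the C++ implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Callable

class Flow(Enum):
    NORMAL = auto()   # keep evaluating the current sequence
    BREAK = auto()    # leave the innermost loop only
    RETURN = auto()   # unwind out of the whole Function composition

@dataclass
class Signal:
    flow: Flow
    value: Any = None

def Return(value: Any) -> Callable[[dict], Signal]:
    return lambda env: Signal(Flow.RETURN, value)

def Break() -> Callable[[dict], Signal]:
    return lambda env: Signal(Flow.BREAK)

def Call(fn: Callable[..., Any], *arg_names: str) -> Callable[[dict], Signal]:
    """Wrap an ordinary function so it reads its arguments from local variables."""
    return lambda env: Signal(Flow.NORMAL, fn(*(env[a] for a in arg_names)))

def FunctionSequence(*steps):
    def run(env: dict) -> Signal:
        for step in steps:
            sig = step(env)
            if sig.flow is not Flow.NORMAL:    # propagate BREAK/RETURN upwards
                return sig
        return Signal(Flow.NORMAL)
    return run

def ForLoop(var: str, values, *body):
    def run(env: dict) -> Signal:
        for v in values:
            env[var] = v                       # CreateLocalVariable-style binding
            sig = FunctionSequence(*body)(env)
            if sig.flow is Flow.BREAK:         # BREAK is consumed by the loop
                break
            if sig.flow is Flow.RETURN:        # RETURN keeps unwinding
                return sig
        return Signal(Flow.NORMAL)
    return run

if __name__ == "__main__":
    # Return the first number greater than 2 from inside a nested composition.
    def check(env):
        return Signal(Flow.RETURN, env["i"]) if env["i"] > 2 else Signal(Flow.NORMAL)

    program = FunctionSequence(
        ForLoop("i", [1, 2, 3, 4],
                Call(lambda i: print("checking", i), "i"),
                check),
        Return("no match"),        # only reached if the loop never returns
    )
    print(program({}).value)       # -> 3
```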

Developing a Syntax Highlighter, Code Linter, Code Completion and Error Checker for the Concept Hierarchy

The current workflow in defining a Concept Hierarchy (by hand) is to create the definitions in .json files, then run a python script that generates the corresponding C++ code of the definitions from the Concept Hierarchy, compile the resulting C++ code into a library, and then embed it in custom applications, such as task learning, task planning, task recognition or task execution.
In case of errors in the Concept Hierarchy definition, there are two steps between the definition and its usage that should catch all errors in the definition. However, during the definition process itself, especially for a novice to the framework, it is hard to find errors or to know which arguments a Function defines, what the types of those parameters are, or what type a concept property has. It is also difficult, especially in deep, nested Function compositions, to know whether the current dictionary represents a type creation, a function call, or a polymorphic subtype creation.
In this project, coding support tools, such as a syntax highlighter, code linter, code completion, and error checker, are to be developed. The goal is to create a plug-in for IDEs (CLion or VSCode) to facilitate the development of Concept Hierarchies and catch errors sooner.

Needed skills:
  • Willingness to learn about coding support tools for (custom) programming languages
  • Experience with writing plug-ins is welcome but not necessary
  • Coding passion
Available data/modules:
  • Examples of correct Concept Hierarchies to test on
Your tasks:
  • Write the syntax (formal grammar) definition of a Concept Hierarchy specification.
  • Develop a syntax highlighter of the JSON files in which a Concept Hierarchy is specified.
  • Develop a code completion tool to, e.g., autocomplete Function arguments or ValueDomain-creation arguments.
  • Develop an error checker to warn about undefined types or Functions and about ambiguous Function calls (see the toy checker sketch below).
  • Develop a code linter to, e.g., enforce naming conventions across the Concept Hierarchy definitions.
  • Export the tools to a plug-in that will be embedded in IDEs such as CLion.
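A toy flavour of the error-checker part, assuming (purely for illustration) that a definition file has a "concepts" list in which each concept names a "parent" and typed "properties"; the real JSON schema of the Concept Hierarchy differs, and a plug-in would report such findings through the IDE's diagnostics instead of printing them.

```python
import json
import sys

# Built-in value domains assumed to exist; the real list comes from the Concept Hierarchy.
BUILTIN_TYPES = {"Number", "Boolean", "String", "Vector<3>"}

def check_file(path: str) -> list[str]:
    """Return human-readable warnings about undefined parents or property types."""
    with open(path) as f:
        data = json.load(f)

    concepts = data.get("concepts", [])
    defined = {c["name"] for c in concepts} | BUILTIN_TYPES
    warnings = []
    for c in concepts:
        parent = c.get("parent")
        if parent and parent not in defined:
            warnings.append(f"{c['name']}: parent '{parent}' is not defined")
        for prop in c.get("properties", []):
            if prop.get("type") not in defined:
                warnings.append(f"{c['name']}.{prop.get('name')}: "
                                f"unknown type '{prop.get('type')}'")
    return warnings

if __name__ == "__main__":
    for w in check_file(sys.argv[1]):
        print("warning:", w)
```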

Evaluating Code Generation Paradigms on Runtime, Library Size, and Usability

The current workflow in using a Concept Hierarchy in an application is to create the Concept Hierarchy, then run a python script to generate C++ code of the Concept Hierarchy definitions, compile the resulting C++ code into a library, and then embed it in custom applications, such as task learning, task planning, task recognition or task execution.
The current C++-code generation from the python script is not (scientifically) proven to be optimal. In particular, we observe long compilation and linking times and a large library size even for small/medium-sized Concept Hierarchies. This is a potential drawback when comparing the Concept Hierarchy with similar knowledge representations. Let's make sure that the generated C++ code, which currently is somewhat optimized for runtime, is also optimized for usability and library size.
In this project, a guideline for optimal code generation for the Concept Hierarchy (based on runtime, library size, and usability) is to be developed. Much code is generated for Functions and Concepts, which can be optimized with different code-generation strategies.

Needed skills:
  • Good-to-Excellent C++ programming skills and experience
  • It is important to have a very good ability to understand C++ code by reading it.
  • Previous experience in python is not a requirement because the language is easy to learn
  • Coding passion
Available data/modules:
  • Existing script that converts a Concept Hierarchy into C++ code; this is the basis for evaluations and comparisons.
Your tasks:
  • Compare different code-generation options based on compilation time: using precompiled headers vs. not using them.
  • Compare the compilation time of code with C++20 modules vs. without.
  • Compare exporting separate libraries for concepts, valueDomains, functions, etc. vs. compiling one big library, with respect to library size and build time.
  • Compare the effect on library size of generating less code with template classes vs. generating more explicit/verbose code.
  • Compare the usability (i.e. link and compilation times) of applications using the Concept Hierarchy when compiling & linking against the different generated libraries.
  • Evaluate the runtime of the different libraries on fixed action recognition sequences (a small measurement-harness sketch follows below).
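The comparisons boil down to repeatedly building each generated variant and recording wall-clock time and artifact size. A small harness along these lines keeps the numbers reproducible (the build commands, build directories, and library paths are placeholders for the actual generation and build setup):

```python
import os
import subprocess
import time

# Placeholder build commands; in practice these would call the generation script
# with different strategies and then CMake/ninja for each variant.
VARIANTS = {
    "monolithic":       ["cmake", "--build", "build-monolithic"],
    "split-libraries":  ["cmake", "--build", "build-split"],
    "precompiled-hdrs": ["cmake", "--build", "build-pch"],
}
LIBRARY_PATHS = {
    "monolithic":       "build-monolithic/libconcept_hierarchy.a",
    "split-libraries":  "build-split/libconcepts.a",
    "precompiled-hdrs": "build-pch/libconcept_hierarchy.a",
}

def measure(name: str, command: list[str]) -> dict:
    start = time.perf_counter()
    subprocess.run(command, check=True)            # full (re)build of the variant
    elapsed = time.perf_counter() - start
    size = os.path.getsize(LIBRARY_PATHS[name])    # resulting library size in bytes
    return {"variant": name,
            "build_s": round(elapsed, 1),
            "size_MB": round(size / 2**20, 1)}

if __name__ == "__main__":
    for name, cmd in VARIANTS.items():
        print(measure(name, cmd))
```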

Task Planning for Robots based on Environment Goal Configurations

A task is a desired state of the environment. The environment consists of physical entities: objects and agents. Each physical entity has properties (Objects have a mass; Containers have a list of contained instances; LiquidContainers most likely contain liquid object instances). The collection of property values of objects and agents, the current time, an environment history, and the dynamic events (i.e. the performed skills) form the state of the environment.
A task defines the desired environment state but not necessarily how to reach that desired state. A plan has to be created that takes into account the abilities of the executing agent and that chooses suitable skills for the agent to execute that transform the current environment state into the desired one defined by the task.
Environment state goals include time-related goals, agent- and object-property goal value specifications, actions and skills that must be done in the environment and combinations of them.
In this project, a planner that creates a task plan based on an environment goal is to be designed and implemented.

Needed skills:
  • good C++ knowledge and experience
  • good algorithms and data structures knowledge
  • knowledge of classical planners and optimization methods
  • a solid mathematical foundation
Available modules:
  • Action definitions modelling changes on entity properties
  • Skill to action associations
  • Agent ability definitions that are composed to form skill executions
Your tasks:
  • Model the differences between the current state and the desired state.
  • Extract the actions that perform these changes.
  • For each action, select a suitable skill that implements that action in the environment and that can be executed by the executing agent in the environment.
  • Optimize skill selection based on heuristics: optimize for distance travelled, energy, time, execution cost (based on the existing behaviour-tree implementation of skill execution), or other cost factors (see the toy sketch below).
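A toy end-to-end version of the first three tasks (the state representation, the action/skill names, and the agent abilities are illustration-only assumptions): states are nested property dictionaries, the diff against the goal yields the properties to change, each change maps to an action, and a skill of the executing agent that implements this action is chosen.

```python
# Hypothetical flat state model: {instance: {property: value}}.
current = {"cup1": {"at": "table", "filled": False}}
goal    = {"cup1": {"at": "cupboard", "filled": False}}

# Which action changes which property, and which skills implement which action
# (in the real system these come from the Concept Hierarchy definitions).
ACTION_FOR_PROPERTY = {"at": "Relocate", "filled": "Fill"}
SKILLS_FOR_ACTION   = {"Relocate": ["PickAndPlace", "Push"], "Fill": ["Pour"]}
AGENT_SKILLS        = {"PickAndPlace", "Pour"}      # abilities of the executing agent

def diff(current: dict, goal: dict) -> list[tuple[str, str, object]]:
    """List (instance, property, desired value) triples that differ from the goal."""
    changes = []
    for instance, props in goal.items():
        for prop, desired in props.items():
            if current.get(instance, {}).get(prop) != desired:
                changes.append((instance, prop, desired))
    return changes

def plan(current: dict, goal: dict) -> list[dict]:
    steps = []
    for instance, prop, desired in diff(current, goal):
        action = ACTION_FOR_PROPERTY[prop]
        # Choose the first skill the agent can actually execute; a real planner
        # would optimize over distance, energy, time, or execution cost here.
        skill = next(s for s in SKILLS_FOR_ACTION[action] if s in AGENT_SKILLS)
        steps.append({"skill": skill, "instance": instance, "property": prop, "value": desired})
    return steps

if __name__ == "__main__":
    print(plan(current, goal))
    # -> [{'skill': 'PickAndPlace', 'instance': 'cup1', 'property': 'at', 'value': 'cupboard'}]
```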

From Task Demonstration(s) to an Adaptable and Generalized Task Goal

Humans have the amazing ability of understanding a single demonstration of a task and performing it in different environments and with different objects. This ability is what the project aims to implement for autonomous robots observing a task demonstration.
It can happen that, even for humans, the generalization of a demonstration does not match the intended task of the demonstrator. In this case, communication between the demonstrator and the observer resolves ambiguities or clarifies the intended level of abstraction.
In this project, a method to abstract a demonstration into a generalized goal is to be implemented. The initial and final configuration of the environment, as well as the sequence of performed skills, are inputs to this project. Depending on your skills, the interface to generalize the demonstration into appropriate Environment-Variations can be:
  • Web-based, in which the demonstrator directly configures the intended task (without a demonstration; to serve as a baseline)
  • LLM-based, in which the task is described textually (no demonstration) and the LLM agent tries to find a Variation (with correct parameters) to describe/represent the task
  • Text-based, like a question-and-answer system in which the system asks the demonstrator questions and he/she selects the appropriate abstraction
  • Simulation-based, showing what the different abstractions/generalizations would look like.
Furthermore, a method to reduce the number of questions or simulation prompts and to speed up the generalization to a task is needed. Experience from previous generalizations could be used to guide the prompts for the current demonstration.

Needed skills:
  • good C++ knowledge and experience
  • good algorithms and data structures knowledge
  • a solid mathematical foundation
Available modules/data:
  • Digital Twin of the Environment in simulation environment CoppeliaSim (C++ controllable)
  • Demonstrated skill sequence
  • Automatic difference computation between final and initial environments
  • Concept Hierarchy defining which generalizations are possible
Your tasks:
  • Develop a method for selecting the important properties of the task (some of the differences may be unintended/accidental, and some properties were not modified only because they already were in their goal state, yet they must also be assessed); see the sketch below.
  • Determine which variations (with which parameters) could explain the differences between the final and the initial environment.
  • Develop an experience-based method for speeding up task generalization.
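A first, deliberately simple cut at the first two tasks (the environment snapshots and the hand-written candidate variations are illustration-only; in the project the candidates would be generated from the Concept Hierarchy): compute which properties changed during the demonstration and keep every candidate Environment-Variation that holds in the final state; the remaining ambiguity between the candidates is exactly what the Q&A or simulation interface would resolve with the demonstrator.

```python
# Hypothetical environment snapshots: {instance: {property: value}}.
initial = {"cup1": {"at": "table"},    "cup2": {"at": "cupboard"}}
final   = {"cup1": {"at": "cupboard"}, "cup2": {"at": "cupboard"}}

# Candidate generalizations (Environment-Variations), written by hand here;
# each is a predicate over an environment snapshot.
CANDIDATE_VARIATIONS = {
    "cup1 is in the cupboard":      lambda env: env["cup1"]["at"] == "cupboard",
    "all cups are in the cupboard": lambda env: all(p["at"] == "cupboard" for p in env.values()),
    "some cup is on the table":     lambda env: any(p["at"] == "table" for p in env.values()),
}

def changed_properties(initial: dict, final: dict) -> list[tuple[str, str]]:
    """(instance, property) pairs whose value changed during the demonstration."""
    return [(i, p) for i, props in final.items()
            for p, v in props.items() if initial[i][p] != v]

def plausible_goals(initial: dict, final: dict) -> list[tuple[str, bool]]:
    """Variations that hold in the final state, flagged with whether they already
    held before the demonstration (those may need a clarifying question)."""
    return [(name, holds(initial)) for name, holds in CANDIDATE_VARIATIONS.items()
            if holds(final)]

if __name__ == "__main__":
    print("changed:", changed_properties(initial, final))
    print("candidate goals:", plausible_goals(initial, final))
    # The ambiguity between 'cup1 is in the cupboard' and 'all cups are in the
    # cupboard' is what the demonstrator would be asked to resolve.
```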

Probabilistic Task Recognition based on Recognized Skills and Goal-Specific Difference Metrics

A solution to a task is a sequence of individual skills for an agent to perform. However, different agents in a different environment may require a different sequence to perform the same task. Think of an assembly task: one can either bring all components to the assembly area and then stick the components together to form the result, or one can stick some components together into partial results, then bring all partial results to the final assembly area and merge them. However, what the different sequences all have in common is that they transform the environment to be closer to the goal. This "closeness to the goal" is defined by a DifferenceMetric in the Concept Hierarchy.
In this project, a sequence of performed skills should be classified as solving a specific task from a list of predefined ones.

Needed skills:
  • good C++ knowledge and experience
  • good algorithms and data structures knowledge
  • a solid mathematical foundation
Available modules:
  • Skill Recognition module for Skills defined in the Concept Hierarchy (see video)
Your tasks:
  • Create a list of tasks (i.e. environment variations) to serve as classification targets.
  • Create DifferenceMetrics that quantify the difference of an environment from a goal specification.
  • Classify the task based on a recognized skill sequence (see the scoring sketch below).
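A compact sketch of the classification idea (the tasks, skill effects, and metrics below are toy assumptions; in the project the skills come from the Skill Recognition module and the DifferenceMetrics from the Concept Hierarchy): after each recognized skill, re-evaluate each task's DifferenceMetric and turn the accumulated progress toward each goal into a soft probability over the candidate tasks.

```python
import math

# Toy environment and skill effects: a skill is (name, object, target location).
def apply(env: dict, skill: tuple[str, str, str]) -> dict:
    name, obj, target = skill
    new = {k: dict(v) for k, v in env.items()}
    if name == "Place":
        new[obj]["at"] = target
    return new

# Candidate tasks with goal-specific DifferenceMetrics (0 = goal reached).
TASKS = {
    "set the table":   lambda env: sum(env[o]["at"] != "table"    for o in env),
    "clear the table": lambda env: sum(env[o]["at"] != "cupboard" for o in env),
}

def classify(env: dict, skills: list[tuple[str, str, str]], temperature: float = 1.0) -> dict:
    """Softmax over accumulated progress (decrease of each task's metric)."""
    progress = {t: 0.0 for t in TASKS}
    for skill in skills:
        nxt = apply(env, skill)
        for task, metric in TASKS.items():
            progress[task] += metric(env) - metric(nxt)   # positive if we moved toward the goal
        env = nxt
    weights = {t: math.exp(p / temperature) for t, p in progress.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

if __name__ == "__main__":
    env = {"plate1": {"at": "cupboard"}, "cup1": {"at": "cupboard"}}
    observed = [("Place", "plate1", "table"), ("Place", "cup1", "table")]
    print(classify(env, observed))   # 'set the table' gets the higher probability
```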