diff --git a/docs/administration-guide/aggregation/overview.md b/docs/administration-guide/aggregation/overview.md index a133e0d57f..0970fd5903 100644 --- a/docs/administration-guide/aggregation/overview.md +++ b/docs/administration-guide/aggregation/overview.md @@ -25,7 +25,7 @@ met: There are a few different use cases for applying ingest aggregation but it is largely driven by the data you have and the analysis you wish to perform. As an example, say you were expecting multiple connections of the same edge between -two nodes but each instance of the edge may have differing values on its +two entities but each instance of the edge may have differing values on its properties, this could be a place to apply aggregation to sum the values etc. Please see the [ingest aggregation example](ingest-example.md) for some common diff --git a/docs/administration-guide/gaffer-deployment/gaffer-docker/gaffer-images.md b/docs/administration-guide/gaffer-deployment/gaffer-docker/gaffer-images.md index 8397d9ee9c..e1d88f9a7a 100644 --- a/docs/administration-guide/gaffer-deployment/gaffer-docker/gaffer-images.md +++ b/docs/administration-guide/gaffer-deployment/gaffer-docker/gaffer-images.md @@ -1,7 +1,7 @@ # Gaffer Images As demonstrated in the [quickstart](../quickstart.md) it is very simple to start -up a basic in memory gaffer graph using the available Open Container Initiative +up a basic in memory Gaffer graph using the available Open Container Initiative (OCI) images. 
For large scale graphs with persistent storage you will want to use a different diff --git a/docs/administration-guide/gaffer-deployment/quickstart.md b/docs/administration-guide/gaffer-deployment/quickstart.md index 25ab7c3438..7559f72d94 100644 --- a/docs/administration-guide/gaffer-deployment/quickstart.md +++ b/docs/administration-guide/gaffer-deployment/quickstart.md @@ -12,7 +12,7 @@ docker pull gchq/gaffer-rest:2.0.0 docker run -p 8080:8080 gchq/gaffer-rest:2.0.0 ``` -The Swagger rest API should be available at +The Swagger REST API should be available at [http://127.0.0.1:8080/rest](http://127.0.0.1:8080/rest) to try out. Be aware that as the image uses the map store backend by default, all graph @@ -45,7 +45,7 @@ are more widely used than others, the main types you might want to use are: [Apache Accumulo](https://accumulo.apache.org/). - **Map Store** - In memory JVM store, useful for quick prototyping. - **Proxy Store** - This provides a way to hook into an existing Gaffer store, - when used all operations are delegated to the chosen Gaffer Rest API. + when used all operations are delegated to the chosen Gaffer REST API. - **Federated Store** - Similar to a proxy store however, this will forward all requests to a collection of sub graphs but merge the responses so they appear as one graph. diff --git a/docs/development-guide/example-deployment/project-setup.md b/docs/development-guide/example-deployment/project-setup.md index c6f90856a9..97c6033892 100644 --- a/docs/development-guide/example-deployment/project-setup.md +++ b/docs/development-guide/example-deployment/project-setup.md @@ -2,7 +2,7 @@ This guide will run through the start up and deployment of a basic Gaffer instance. It will cover how to write a basic Gaffer Schema from scratch along with using the pre-made containers to run the -Gaffer rest API and Accumulo based data store. +Gaffer REST API and Accumulo based data store. !!! 
warning Please be aware that the example is only intended to demonstrate the core Gaffer concepts it is @@ -12,7 +12,7 @@ Gaffer rest API and Accumulo based data store. ## The Example Graph For this basic example we will attempt to recreate the graph in the following diagram consisting of -two nodes (vertexes) with one directed edge between them. +two entities with one directed edge between them. ```mermaid graph LR @@ -104,7 +104,7 @@ that suites a stand alone deployment consisting of the following file structure: 2. Any data files, e.g. CSV, to be made available to the Gaffer container. 3. The main graph config file to set various properties of the overall graph. 4. This file holds the schema outlining the elements in the graph, e.g. the - nodes (aka entities) and edges. + entities and edges. 5. This file defines the different data types in the graph and how they are serialised to Java classes. 6. Config file for additional Gaffer operations and set the class to handle @@ -189,7 +189,7 @@ gaffer.store.operation.declarations=/gaffer/store/operationsDeclarations.json ### Operations Declarations The operation declarations file is a way of enabling additional operations in Gaffer. By default -there are some built in operations already available (the rest API has a get all operations request +there are some built in operations already available (the REST API has a get all operations request to see a list), but its likely you might want to enable others or add your own custom ones. As the example will load its data from a local CSV file we can activate a couple of additional operations using the following file. diff --git a/docs/development-guide/example-deployment/using-the-api.md b/docs/development-guide/example-deployment/using-the-api.md index 80647f5f96..739530351f 100644 --- a/docs/development-guide/example-deployment/using-the-api.md +++ b/docs/development-guide/example-deployment/using-the-api.md @@ -14,7 +14,7 @@ use to load data or query. 
Gaffer supports various methods of loading data and depending on your use case you can even bypass it all together to load directly into Accumulo. -This example will focus on using the rest API to add the graph elements. In production this method +This example will focus on using the REST API to add the graph elements. In production this method would not be recommended for large volumes of data. However, it is fine for smaller data sets and generally can be done in a few stages outlined in the following diagram. @@ -33,11 +33,11 @@ is a standard `AddElements` operation which takes raw elements JSON as input and graph. !!! info - This is where the schema is used here to validate the elements are correct and conform before - adding. + This is where the schema is used to validate the elements are correct and conform before + adding them to a graph. Using the example again we will demonstrate how we could write an operation chain to load the data -from the neo4j formatted CSV file. +from the Neo4j formatted CSV file. ```json { @@ -66,24 +66,24 @@ via the `operationsDeclarations.json`), which streams the data from the CSV file into the next `GenerateElements` operation. For the generator we have selected the built in `Neo4jCsvElementGenerator` class, this is already -set up to be able to parse a correctly formatted neo4j exported CSV into Gaffer elements via the +set up to be able to parse a correctly formatted Neo4j exported CSV into Gaffer elements via the schema. If you are curious as to what the output of each operation is you can try run a subset of this chain to see how the data changes on each one, the output should be returned back to you in the server response section of the Swagger API. ## Querying Data -Once data is loaded in the graph its now possible to start querying the data to gain insight and +Once data is loaded into the graph it is now possible to start querying the data to gain insight and perform analytics. 
Querying in Gaffer can get fairly complex but generally simple queries are made up of two parts; a `Get` Operation and a `View`. -Starting with the `Get` operation, say we want to get all nodes and edges based on their ID. To do -this we can use the `GetElements` operation and set the `Seed` to the entity (e.g. node) or edge +Starting with the `Get` operation, say we want to get all entities and edges based on their ID. To do +this we can use the `GetElements` operation and set the `Seed` to the vertex or edge where we want to start the search. To demonstrate this on the example graph we can attempt to get -all entities and edges associated with the `Person` node with ID `v1`. +all entities and edges associated with the `Person` entity with ID `v1`. -The result from this query should return the node associated with the `v1` id along with any edges -on this node, which in this case is just one +The result from this query should return the entity associated with the `v1` id along with any edges +on this vertex, which in this case is just one. === "Input Query" ```json @@ -148,9 +148,9 @@ manipulate the results. In general a `View` has the following possible use cases or excluded. Taking the example from the previous section we will demonstrate general filtering on a query. As -before, the query returns the node `v1` and any edges associated with it. We will now filter it to +before, the query returns the vertex `v1` and any edges associated with it. We will now filter it to include only edges where the weight is over a certain value. In this scenario it is analogous to -asking, *"get all the `Created` edges on node `v1` that have a `weight` greater than 0.3"*. +asking, *"get all the `Created` edges on vertex `v1` that have a `weight` greater than 0.3"*. 
=== "Filter Query" diff --git a/docs/development-guide/example-deployment/writing-the-schema.md b/docs/development-guide/example-deployment/writing-the-schema.md index 101b07feed..2db84fcfe0 100644 --- a/docs/development-guide/example-deployment/writing-the-schema.md +++ b/docs/development-guide/example-deployment/writing-the-schema.md @@ -1,7 +1,7 @@ # Writing the Schema In Gaffer JSON based schemas need to be written upfront to model and understand how to load and -treat the data in the graph. These schemas define all aspects of the nodes and edges in the graph, +treat the data in the graph. These schemas define all aspects of the entities and edges in the graph, and can even be used to automatically do basic analysis or aggregation on queries and ingested data. For reference, this guide will use the same CSV data set from the [project setup](./project-setup.md#the-example-graph) page. @@ -23,7 +23,7 @@ For reference, this guide will use the same CSV data set from the [project setup ## Elements Schema -In Gaffer an element refers to any object in the graph, i.e. your nodes (vertexes) and edges. To set +In Gaffer, an element refers to any object in the graph, i.e. your entities and edges. To set up a graph we need to tell Gaffer what objects are in the graph and the properties they have. The standard way to do this is a JSON config file in the schema directory. The filename can just be called something like `elements.json`, the name is not special as all files under the `schema` @@ -33,7 +33,7 @@ using an appropriate name. As covered in the [Getting Started Schema page](../../user-guide/schema.md), to write a schema you can see that there are some required fields, but largely a schema is highly specific to your input data. 
-Starting with the `entities` from the example, we can see there will be two distinct types of nodes +Starting with the `entities` from the example, we can see there will be two distinct types of entity in the graph; one representing a `Person` and another for `Software`. These can be added into the schema to give something like the following: @@ -59,7 +59,7 @@ schema to give something like the following: From the basic schema you can see that we have added two entity types for the graph. For now, each `entity` just contains a short description and a type associated to the `vertex` key. The type here is just a placeholder, but it has been named appropriately as it's assumed that we will just use the -string representation of the node's id (this will be defined in the `types.json` later in the +string representation of the entity's id (this will be defined in the `types.json` later in the guide). Expanding on the basic schema we will now add the `edges` to the graph. As the example graph is @@ -92,14 +92,14 @@ As discussed in the [user schema guide](../../user-guide/schema.md), edges have the `source` and `destination` fields, these must match the types associated with the vertex field in the relevant entities. From the example, we can see that the source of a `Created` edge is a `Person` so we will use the placeholder type we set as the `vertex` field which is -`id.person.string`. Similarly the destination is a `Software` node so we will use its placeholder of +`id.person.string`. Similarly the destination is a `Software` vertex so we will use its placeholder of `id.software.string`. We must also set whether an edge is directed or not, in this case it is as only a person can create software not the other way around. To set this we will use the `true` type, but note that this is a placeholder and must still be defined in the types.json.
-Continuing with the example, the nodes and edges also have some properties associated with each such +Continuing with the example, the entities and edges also have some properties associated with each such as name, age etc. These can also be added to the schema using a properties map to result in the extended schema below. @@ -152,10 +152,10 @@ schema there are some placeholder types added as the values for many of the keys similarly to if you have ever programmed in a strongly typed language, they are essentially the wrapper for the value to encapsulate it. -Now starting with the types for the nodes/vertexes, we used two placeholder types, one for the +Now starting with the types for the entities, we used two placeholder types, one for the `Person` entity and one for the `Software` entity. From the example CSV you can see there is a `_id` -column that uses a string identifier that is used for the ID of the node (this will also be used by -the `edge` to identify the source and destination). We will define a type for each node ID using the +column that uses a string identifier that is used for the ID of the entity (this will also be used by +the `edge` to identify the source and destination). We will define a type for each entity ID using the standard java `String` class to encapsulate it, this leads to a basic `type.json` like the following. @@ -175,17 +175,17 @@ following. ``` The next set of types that need defining are, the ones used for the properties that are attached to -the nodes/entities. Again we need to take a look back at what our input data looks like, in the CSV +the entities. Again we need to take a look back at what our input data looks like, in the CSV file we can see there are three different types that are used for the properties which are analogous to a `String`, an `Integer` and a `Float`. !!! 
tip - Of course technically, all of these properties could be encapsulated in a string but, assigning - a relevant type allows some additional type specific features when doing things like grouping - and aggregation as it would in traditional programming. + Of course technically, all of these properties could be encapsulated in a string but assigning + a relevant type allows some additional type specific features often used in grouping + and aggregation. If we make a type for each of the possible properties using the standard Java classes we end up with -the following. +the following: ```json { diff --git a/docs/development-guide/introduction.md b/docs/development-guide/introduction.md index d009683c68..62211c7006 100644 --- a/docs/development-guide/introduction.md +++ b/docs/development-guide/introduction.md @@ -15,7 +15,7 @@ Organization](https://github.com/orgs/gchq/repositories). The core Java [Gaffer repo](https://github.com/gchq/Gaffer) contains the main Gaffer product. If you are completely new to Gaffer you can try out our [Road Traffic Demo](https://github.com/gchq/Gaffer/blob/master/example/road-traffic/README.md) or look at our example [deployment guide](../development-guide/example-deployment/project-setup.md). -The [gafferpy repo](https://github.com/gchq/gafferpy) contains a python shell that can execute operations. +The [gafferpy repo](https://github.com/gchq/gafferpy) contains a Python shell that can execute operations. The [gaffer-docker repo](https://github.com/gchq/gaffer-docker) contains the code needed to run Gaffer using Docker or Kubernetes. More information about running a containerised instance of Gaffer can be found in our [adminstration guide](../administration-guide/introduction.md). 
diff --git a/docs/development-guide/rest-api-sketches.md b/docs/development-guide/rest-api-sketches.md index 0a468f97b0..f08ac38a9f 100644 --- a/docs/development-guide/rest-api-sketches.md +++ b/docs/development-guide/rest-api-sketches.md @@ -28,7 +28,7 @@ object using the `ObjectMapper` module which uses the relevant deserialiser ( ## Creating cardinality values over JSON -When adding or updating a cardinality object over the rest api, you specify the vertex values to add to the sketch. +When adding or updating a cardinality object over the REST API, you specify the vertex values to add to the sketch. This is done by either using the `offers` field with `HyperLogLogPlus`, or the `values` field with `HllSketch`. The HyperLogLog object is then instantiated and updated with the values. The object can then be serialised and stored in the datastore. diff --git a/docs/reference/glossary.md b/docs/reference/glossary.md index 55f735b7cd..cdd8e85e3e 100644 --- a/docs/reference/glossary.md +++ b/docs/reference/glossary.md @@ -16,7 +16,7 @@ hide: | Stores | A Gaffer store represents the backing database responsbile for storing or facilitating access to a graph | | Operations | An operation is an instruction / function that you send to the API to manipulate and query a graph | | Matched vertex | `matchedVertex` is a field added to Edges which are returned by Gaffer queries, stating whether your seeds matched the source or destination | -| Python | A programming language that is used to build applications. Gaffer uses python to interact with the API | +| Python | A programming language that is used to build applications. Gaffer uses Python to interact with the API | | Java | A object oriented programming language used to build software. Gaffer is primarily built in Java | | Database | A database is a collection of organised structured information or data typically stored in a computer system | | API | Application Programming Interface. 
An API is for one or more services / systems to communicate with each other | diff --git a/docs/reference/operations-guide/accumulo.md b/docs/reference/operations-guide/accumulo.md index faf9b22206..7c3b025dbf 100644 --- a/docs/reference/operations-guide/accumulo.md +++ b/docs/reference/operations-guide/accumulo.md @@ -280,7 +280,7 @@ This operation has been introduced as a replacement to the `GetElementsBetweenSe !!! warning "Currently Unavailable" - The python API for this operation is currently unavailable [see this issue](https://github.com/gchq/gafferpy/issues/14). + The Python API for this operation is currently unavailable [see this issue](https://github.com/gchq/gafferpy/issues/14). Results: diff --git a/docs/user-guide/apis/java-api.md b/docs/user-guide/apis/java-api.md index f08eea4e13..ad6ebca541 100644 --- a/docs/user-guide/apis/java-api.md +++ b/docs/user-guide/apis/java-api.md @@ -1,19 +1,19 @@ # Using the Java API As Gaffer is written in Java there is native support to allow use of all its -public classes. Using Gaffer via the Java interface does differ from the rest +public classes. Using Gaffer via the Java interface does differ from the REST API and `gafferpy` but is fully featured with extensive [Javadocs](https://gchq.github.io/Gaffer/overview-summary.html). However, you -will of course need to be familiar with writing and running Java code in order +will need to be familiar with writing and running Java code in order to utilise this form of the API. ## Querying a Graph -Using Java to query a graph unlike the other APIs requires a reference to a +Using Java to query a graph, unlike the other APIs, requires a reference to a `Graph` object that essentially represents a graph. 
With the other APIs you would connect directly to a running instance via the -rest interface; however, to do this with Java you would need to configure a +REST interface; however, to do this with Java you would need to configure a `Graph` object with a [proxy store](../../administration-guide/gaffer-stores/proxy-store.md). !!! example "" diff --git a/docs/user-guide/apis/python-api.md b/docs/user-guide/apis/python-api.md index 98931640ca..e9287077e8 100644 --- a/docs/user-guide/apis/python-api.md +++ b/docs/user-guide/apis/python-api.md @@ -1,8 +1,7 @@ # Using the Python API -This section covers an overview of the python API extension for Gaffer to -demonstrate how to get up and running to perform queries from Python code on an -existing running graph. +This section covers an overview of the Python API extension for Gaffer. +Getting this extension up and running allows users to perform queries using Python code on existing graphs. !!! tip Please see the handy introduction to [Python](../gaffer-basics/what-is-python.md) @@ -10,22 +9,22 @@ existing running graph. ## What is the Python Extension? -Commonly referred to as `gafferpy` this is an API to gaffer that provides -similar querying capabilities to the rest API but from Python. Fundamentally it -wraps the rest API to use the same JSON under the hood this means you should be -able to access almost any features or end points available in the main rest API. +Commonly referred to as `gafferpy`, this API provides +similar querying capabilities to the REST API using Python. Fundamentally, it +wraps the REST API allowing users to access almost all the features or end +points available in the main REST API using Python rather than JSON. ## Installation Currently there isn't a release of `gafferpy` on pypi or other pip repository; however, the source code can still be cloned from the [git repository](https://github.com/gchq/gafferpy/tree/main) -and installed via pip. 
Please see the readme in the `gafferpy` repository for +and installed via pip. Please see the [README](https://github.com/gchq/Gafferpy#readme) in the `gafferpy` repository for full instructions. ## How to Query a Graph To get started with `gafferpy` you will need to import the module and connect to -an existing graph, the connection should be the same address as where the rest +an existing graph. The connection should be the same address as where the REST API is running. ```python @@ -35,12 +34,11 @@ g_connector = gaffer_connector.GafferConnector("http://localhost:8080/rest/lates ``` Once connected you can access and run the same endpoints and operations as you -would via the usual rest API but via their python classes. The endpoints are -accessed via the `GafferConnector` to allow you executing Operation chains to -perform queries on the graph. +would using the usual REST API but via their Python classes. The endpoints are +accessed via the `GafferConnector` where users can then query graphs by executing Operation Chains. !!! note - Some of the features of the full rest API may not be present in + Some of the features of the full REST API may not be present in `gafferpy` so always check the [reference guide](../../reference/intro.md) first. @@ -55,7 +53,7 @@ perform queries on the graph. ``` !!! example "" - An Operation chain can be run using the `execute_operation_chain()` function. + An Operation Chain can be run using the `execute_operation_chain()` function. As an example, the following will get all the elements in a graph then count them. 
diff --git a/docs/user-guide/apis/rest-api.md b/docs/user-guide/apis/rest-api.md index b71f1709bb..d287b72e9f 100644 --- a/docs/user-guide/apis/rest-api.md +++ b/docs/user-guide/apis/rest-api.md @@ -1,18 +1,18 @@ -# Using the Rest API +# Using the REST API -These sections will cover the usage of the Gaffer rest API to perform queries +These sections will cover the usage of the Gaffer REST API to perform queries and operations on a graph. This guide should cover a lot of the use cases a user may face; however please refer to the [reference guide](../../reference/intro.md) for a full list of what is possible. -## What is the Rest API? +## What is the REST API? When a graph is deployed, a REST (or RESTful) API will be available at a predefined address. This provides an application programming interface (API) that a user or computer can interact with to send and receive data between them and the application. -In Gaffer, the Rest API consists of various predefined HTTP requests known as +In Gaffer, the REST API consists of various predefined HTTP requests known as endpoints that can be used to interact with a running graph instance. These endpoints are accessed either by sending a crafted HTTP request to them e.g. with a tool like [`curl`](https://curl.se/docs/httpscripting.html) or more @@ -20,16 +20,15 @@ commonly by the provided [Swagger UI](https://swagger.io/). ## Querying a Graph -If you wish to just query to get some information about the graph instance such -as what schema it is using or what available Operations it has then there +If you wish to simply run a query which gets some information about the graph instance, such +as what schema is being used or what Operations are available, then there should already be `GET` endpoints to do that. Executing any of these `GET` requests will simply 'get' you some information, however they may be of limited use for a user. -The main endpoint a user will interact with is `/graph/operations/execute`. 
This -is a `POST` request as it allows you to 'post' some data to it and get a -response back. From here is where you can do querying and run operations on the -graph to and extract data and do analysis with the graph. +The main endpoint users interact with is `/graph/operations/execute`. This +is a `POST` endpoint which allows you to 'post' a query to that endpoint which +then responds with data. In Gaffer, JSON is the main interchange language which means you can post JSON and get response back in it. diff --git a/docs/user-guide/gaffer-basics/what-is-cardinality.md b/docs/user-guide/gaffer-basics/what-is-cardinality.md index 9205669f95..437094f687 100644 --- a/docs/user-guide/gaffer-basics/what-is-cardinality.md +++ b/docs/user-guide/gaffer-basics/what-is-cardinality.md @@ -35,7 +35,7 @@ Entity. This property is usually added to a specific Entity group that exists so the Cardinality of a given vertex value. An example of the schema changes can be seen in the [advanced properties guide](../../reference/properties-guide/advanced.md#hllsketch). If you are using an Accumulo or Map store as your data store, this should be all that is needed. However, if -you are using a custom store, or a custom rest API, some additional config is needed. +you are using a custom store, or a custom REST API, some additional config is needed. !!! tip It is often useful keep track of cardinality per edge group. This is usually done with an edge @@ -68,9 +68,9 @@ you are using a custom store, or a custom rest API, some additional config is ne gaffer.serialiser.json.modules=uk.gov.gchq.gaffer.sketches.serialisation.json.SketchesJsonModules ``` - If you are using a custom data store, or you not using the standard spring-rest Gaffer rest API, + If you are using a custom data store, or you are not using the standard spring-rest Gaffer REST API, then you will also need to ensure that the `sketches-library` dependency is added to your - `pom.xml` for the store and/or rest API.
+ `pom.xml` for the store and/or REST API. ```xml diff --git a/docs/user-guide/gaffer-basics/what-is-gaffer.md b/docs/user-guide/gaffer-basics/what-is-gaffer.md index b8f6981d30..cf33e34f1e 100644 --- a/docs/user-guide/gaffer-basics/what-is-gaffer.md +++ b/docs/user-guide/gaffer-basics/what-is-gaffer.md @@ -2,7 +2,7 @@ Gaffer is a graph database framework, it acts similarly to an interface providing a graph data structure on top of a chosen storage technology to enable -storage of large graphs and traversal of it's nodes and edges. In a nutshell +storage of large graphs and traversal of its entities and edges. In a nutshell Gaffer allows you to take data, convert it into a graph, store it in a database and then run queries and analytics on it. diff --git a/docs/user-guide/query/gaffer-syntax/import-export/csv.md b/docs/user-guide/query/gaffer-syntax/import-export/csv.md index 0ad6dd0122..95384c91f9 100644 --- a/docs/user-guide/query/gaffer-syntax/import-export/csv.md +++ b/docs/user-guide/query/gaffer-syntax/import-export/csv.md @@ -39,7 +39,7 @@ gaffer.store.operation.declarations=/gaffer/store/operationsDeclarations.json ## How to Import and Export -You can use the rest API to add the graph elements. In production this method +You can use the REST API to add the graph elements. In production this method would not be recommended for large volumes of data. However, it is fine for smaller data sets and generally can be done in a few stages outlined in the following diagram. diff --git a/docs/user-guide/query/gaffer-syntax/operations.md b/docs/user-guide/query/gaffer-syntax/operations.md index acdf9bb063..a7d666ea2d 100644 --- a/docs/user-guide/query/gaffer-syntax/operations.md +++ b/docs/user-guide/query/gaffer-syntax/operations.md @@ -10,7 +10,7 @@ If you have ever used a shell language such as principal where you have lots of smaller self contained commands (aka Operations) that can work together to form more complicated use cases.
-The general structure of an Operation when using JSON via the rest API looks +The general structure of an Operation when using JSON via the REST API looks like the following: ```json @@ -37,9 +37,9 @@ The Operations in Gaffer can be chained together to form complex graph queries. This page will give some general example usage of how you can chain Operations together. -As an example of a simple operation, say we want to get all nodes and edges +As an example of a simple operation, say we want to get all entities and edges based on their ID. To do this we can use the `GetElements` operation and set the -`EntitySeed` to the entity (e.g. node) or edge where we want to start the search. +`EntitySeed` to the vertex or edge where we want to start the search. !!! example ""