1. Overview

EnMasse is an open source project for managed, self-service messaging on [Kubernetes](https://kubernetes.io). EnMasse can run on your own infrastructure or in the cloud, and simplifies running a messaging infrastructure for your organization.

The service admin can deploy and manage messaging infrastructure, while tenants can request messaging resources, both using cloud-native APIs and tools.

1.1. Features

  • Built-in authentication and authorization of clients and identity management

  • Runs on Kubernetes: deploy on-premise or in the cloud

  • Different messaging patterns such as request-response, publish-subscribe and events

  • Decouple operation of infrastructure from configuration and use by applications

EnMasse can be used for many purposes, such as moving your messaging infrastructure to the cloud without depending on a specific cloud provider, building a scalable messaging backbone for IoT, or just as a cloud-ready version of a message broker.

EnMasse can provision different types of messaging depending on your use case. A user can request messaging resources by creating an address space.

EnMasse currently supports a standard and a brokered address space type, each with different semantics.

1.2. Standard address space

The standard address space type is the default type in EnMasse, and is focused on scaling in the number of connections and the throughput of the system. It supports AMQP and MQTT protocols, with more to come in the future. This address space type is based on other open source projects such as [Apache ActiveMQ Artemis](https://activemq.apache.org/artemis/) and [Apache Qpid Dispatch Router](https://qpid.apache.org/components/dispatch-router/index.html) and provides elastic scaling of these components. This image illustrates the high-level architecture of the standard address space:

Standard Address Space

1.3. Brokered address space

The brokered address space type is the "classic" message broker in the cloud which supports AMQP, CORE, OpenWire, and MQTT protocols. It supports JMS with transactions, message groups, selectors on queues and so on. These features are useful for building complex messaging patterns. This address space is also more lightweight as it features only a single broker and a management console. This image illustrates the high-level architecture of the brokered address space:

Brokered Address Space

2. Installation

2.1. Installing EnMasse on OpenShift

EnMasse can be installed using automated Ansible playbooks, the deploy.sh script, or by following the manual steps.

Note
You can invoke the deployment script with -h to view a list of options.
Prerequisites
  • To install EnMasse, the OpenShift client tools are required. You can download the client tools from the OpenShift Origin project. EnMasse has been tested with the latest stable release of the OpenShift Origin client.

  • An OpenShift cluster is required. If you do not have an OpenShift cluster available, see Minishift for an example of how to run a local instance of OpenShift on your machine.

  • A method to generate certificates is required. This guide uses OpenSSL.

2.1.1. Downloading EnMasse

Procedure

2.1.2. Installing EnMasse using Ansible

Installing EnMasse using Ansible requires creating an inventory file with the variables for configuring the system. Example inventory files are available in the ansible/inventory folder. For more information about the supported Ansible configuration settings, see the Ansible configuration settings reference.

An example inventory file that enables both the API server and service broker integration:

[enmasse]
localhost ansible_connection=local

[enmasse:vars]
namespace=enmasse
multitenant=true
enable_rbac=false
api_server=true
service_catalog=true
register_api_server=true
keycloak_admin_password=admin
authentication_services=["standard"]
Procedure
  1. (Optional) Create an inventory file.

  2. Run the ansible playbook:

    ansible-playbook -i <inventory file> ansible/playbooks/openshift/deploy_all.yml

2.1.3. Installing EnMasse manually

The manual deployment procedure can be performed on any platform supporting the OpenShift client.

Creating the project for EnMasse
Procedure
  • Create the enmasse project:

    oc new-project enmasse
Deploying authentication services

EnMasse requires at least one authentication service to be deployed. The authentication service can be none (allow all), standard (Keycloak), or external (not managed by EnMasse).

Deploying the none authentication service
Procedure
  1. Create a certificate to use with the none authentication service. For testing purposes, you can create a self-signed certificate:

    mkdir -p none-authservice-cert
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=none-authservice.enmasse.svc.cluster.local" -out none-authservice-cert/tls.crt -keyout none-authservice-cert/tls.key
  2. Create a secret with the none authentication service certificate:

    oc create secret tls none-authservice-cert --cert=none-authservice-cert/tls.crt --key=none-authservice-cert/tls.key
  3. Create the none authentication service:

    oc create -f ./resources/none-authservice/service.yaml
    oc create -f ./resources/none-authservice/deployment.yaml
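Before loading a self-signed certificate into a secret, you can sanity-check it with openssl x509. The following is a minimal sketch that regenerates an equivalent certificate in a scratch directory (the /tmp path is illustrative) and inspects its subject and validity period:

```shell
# Generate a throwaway self-signed certificate like the one in step 1
# (the scratch directory name is illustrative).
mkdir -p /tmp/none-authservice-cert
openssl req -new -x509 -batch -nodes -days 11000 \
  -subj "/O=io.enmasse/CN=none-authservice.enmasse.svc.cluster.local" \
  -out /tmp/none-authservice-cert/tls.crt -keyout /tmp/none-authservice-cert/tls.key

# Inspect the subject and validity period before creating the secret from it.
openssl x509 -in /tmp/none-authservice-cert/tls.crt -noout -subject -dates
```

The subject printed by the second command should match the service host name used by the authentication service.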
Deploying the standard authentication service
Procedure
  1. Create a certificate to use with the standard authentication service. For testing purposes, you can create a self-signed certificate:

    mkdir -p standard-authservice-cert
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=standard-authservice.enmasse.svc.cluster.local" -out standard-authservice-cert/tls.crt -keyout standard-authservice-cert/tls.key
  2. Create a secret with the standard authentication service certificate:

    oc create secret tls standard-authservice-cert --cert=standard-authservice-cert/tls.crt --key=standard-authservice-cert/tls.key
  3. Create a secret with the Keycloak admin credentials. Choose the password wisely because this user has complete control over authentication and authorization policies:

    oc create secret generic keycloak-credentials --from-literal=admin.username=admin --from-literal=admin.password=myrandompassword
  4. Grant privileges to the service account:

    oc login -u system:admin
    oc adm policy add-cluster-role-to-user enmasse.io:keycloak-controller system:serviceaccount:enmasse:enmasse-admin
  5. Create the standard authentication service:

    oc create -f ./resources/standard-authservice/service.yaml
    oc create -f ./resources/standard-authservice/keycloak-deployment.yaml
    oc create -f ./resources/standard-authservice/controller-deployment.yaml
    oc create -f ./resources/standard-authservice/pvc.yaml
    oc create -f ./resources/standard-authservice/route.yaml
  6. Create the Keycloak configuration used by the controller and service. To make the standard authentication service accessible to the messaging console and the Keycloak operator, you must specify the httpUrl setting. If you are running a local cluster without a public DNS, use the internal service IP address for the host name; otherwise, use the host name of the external route. To obtain the service IP address, use this command:

    oc get service standard-authservice -o jsonpath={.spec.clusterIP}
    Or, if you have a public host name, use this command to obtain the host name:

    oc get route keycloak -o jsonpath={.spec.host}
  7. Create the Keycloak configuration:

    AUTH_HOST=value from one of the previous commands
    AUTH_PORT=8443 if using the service ip, 443 if using the route host
    oc create configmap keycloak-config --from-literal=hostname=standard-authservice --from-literal=port=5671 --from-literal=httpUrl=https://$AUTH_HOST:$AUTH_PORT/auth --from-literal=caSecretName=standard-authservice-cert
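The AUTH_HOST and AUTH_PORT lines above are placeholders. As a minimal sketch with illustrative values (substitute the actual output of the previous commands), the resulting httpUrl is composed like this:

```shell
# Illustrative values only; substitute the output of the previous commands.
AUTH_HOST=172.30.1.100   # cluster IP example; use the route host if you have one
AUTH_PORT=8443           # 8443 with the service IP, 443 with the route host

# This is the value passed as httpUrl in the keycloak-config ConfigMap.
HTTP_URL="https://$AUTH_HOST:$AUTH_PORT/auth"
echo "$HTTP_URL"
```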
Deploying the address space controller

The address space controller is responsible for creating the infrastructure used by address spaces.

Note
To install EnMasse on OpenShift, you must have cluster-admin access to set up the required roles for creating namespaces and managing resources in those namespaces; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Procedure
  1. Create a service account for the EnMasse address space controller:

    oc create sa enmasse-admin
  2. Create cluster-wide roles used by the enmasse-admin service account:

    oc login -u system:admin
    oc create -f ./resources/cluster-roles/openshift/address-space-controller.yaml
  3. Grant privileges to the service account:

    oc login -u system:admin
    oc policy add-role-to-user admin system:serviceaccount:enmasse:enmasse-admin
    oc adm policy add-cluster-role-to-user enmasse.io:address-space-controller system:serviceaccount:enmasse:enmasse-admin

    Note: You can log in again as the regular user after this step.

  4. Install the default plan and resource configuration:

    oc create -f ./resources/resource-definitions/resource-definitions.yaml
    oc create -f ./resources/plans/standard-plans.yaml
    oc create -f ./resources/plans/brokered-plans.yaml
  5. Deploy the address space controller:

    oc create -f ./resources/address-space-controller/address-space-definitions.yaml
    oc create -f ./resources/address-space-controller/deployment.yaml
(Optional) Deploying the API server

The API server provides a REST API for creating address spaces and addresses. It can also serve as a Kubernetes API server if it is registered as an APIService.

Note
To install EnMasse on OpenShift, you must have cluster-admin access to set up the required roles for delegating authentication to the Kubernetes master; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Procedure
  1. Create a service account for the EnMasse API server:

    oc create sa enmasse-admin
  2. Create cluster-wide roles used by the enmasse-admin service account:

    oc login -u system:admin
    oc create -f ./resources/cluster-roles/api-server.yaml
  3. Grant privileges to the service account:

    oc login -u system:admin
    oc policy add-role-to-user admin system:serviceaccount:enmasse:enmasse-admin
    oc adm policy add-cluster-role-to-user enmasse.io:api-server system:serviceaccount:enmasse:enmasse-admin
    oc adm policy add-cluster-role-to-user system:auth-delegator system:serviceaccount:enmasse:enmasse-admin

    Note: You can log in again as the regular user after this step.

  4. Create a certificate to use with the API server. For testing purposes, you can create a self-signed certificate:

    mkdir -p api-server-cert/
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=api-server.enmasse.svc.cluster.local" -out api-server-cert/tls.crt -keyout api-server-cert/tls.key
  5. Create a secret containing the API server certificate:

    oc create secret tls api-server-cert --cert=api-server-cert/tls.crt --key=api-server-cert/tls.key
  6. Create the API server configuration:

    oc create configmap api-server-config --from-literal=enableRbac=false
  7. Deploy the API server:

    oc create -f ./resources/api-server/deployment.yaml
    oc create -f ./resources/api-server/service.yaml

  8. (Optional) Register the API server to support custom resources:

    oc process -f ./resources/templates/api-service.yaml ENMASSE_NAMESPACE=enmasse | oc create -f -
  9. (Optional) Create a route exposing the API server:

    oc create route passthrough restapi --service=api-server -n enmasse
(Optional) Deploying the service broker

The service broker provides an implementation of the Open Service Broker API that integrates with the Kubernetes Service Catalog. The service broker requires the standard authentication service to be deployed.

Note
To install EnMasse on OpenShift, you must have cluster-admin access to set up the required roles for delegating authentication to the Kubernetes master; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Prerequisite
  • The service broker requires the standard authentication service to be deployed.

Procedure
  1. Create a service account for the EnMasse service broker:

    oc create sa enmasse-admin
  2. Create cluster-wide roles used by the enmasse-admin service account:

    oc login -u system:admin
    oc create -f ./resources/cluster-roles/service-broker.yaml
  3. Grant privileges to the service account:

    oc login -u system:admin
    oc policy add-role-to-user admin system:serviceaccount:enmasse:enmasse-admin
    oc adm policy add-cluster-role-to-user enmasse.io:service-broker system:serviceaccount:enmasse:enmasse-admin
    oc adm policy add-cluster-role-to-user system:auth-delegator system:serviceaccount:enmasse:enmasse-admin
    Note
    You can log in again as the regular user after this step.
  4. Create a certificate to use for the service broker. For testing purposes, you can create a self-signed certificate:

    mkdir -p service-broker-cert/
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=service-broker.enmasse.svc.cluster.local" -out service-broker-cert/tls.crt -keyout service-broker-cert/tls.key
  5. Create a secret containing the service broker certificate:

    oc create secret tls service-broker-cert --cert=service-broker-cert/tls.crt --key=service-broker-cert/tls.key
  6. Create a secret containing the service broker credentials:

    oc create secret generic service-broker-secret --from-literal=keycloak.username=admin --from-literal=keycloak.password=admin --from-literal=keycloakCa.crt="$(oc extract secret/standard-authservice-cert --keys=tls.crt --to=-)"
  7. Deploy the service broker:

    oc create -f ./resources/service-broker/deployment.yaml
    oc create -f ./resources/service-broker/service.yaml
  8. To ensure the service broker redirects correctly, you must specify the keycloakUrl setting. If you are running a local cluster without a public DNS, use the internal service IP address for the host name; otherwise, use the host name of the external route. To obtain the service IP address, use this command:

    oc get service standard-authservice -o jsonpath={.spec.clusterIP}

    Or, if you have a public host name, use this command to obtain the host name:

    oc get route keycloak -o jsonpath={.spec.host}
  9. Create the service broker configuration:

    AUTH_HOST=value from one of the previous commands
    AUTH_PORT=8443 if using the service ip, 443 if using the route host
    oc create configmap service-broker-config --from-literal=enableRbac=false --from-literal=keycloakUrl=https://$AUTH_HOST:$AUTH_PORT/auth
  10. Create a secret with a token for the Service Catalog:

    oc create secret generic service-catalog-credentials --from-literal=token=`oc whoami -t`
  11. Register the service broker with the Service Catalog:

    oc process -f ./resources/templates/service-broker.yaml BROKER_NAMESPACE=enmasse | oc create -f -
Deploying EnMasse to a single address space
Procedure
  1. Create service accounts for the EnMasse address space controller and address space:

    oc create sa enmasse-admin
    oc create sa address-space-admin
  2. Grant privileges required for viewing and managing resources:

    oc policy add-role-to-user view system:serviceaccount:enmasse:default
    oc policy add-role-to-user admin system:serviceaccount:enmasse:enmasse-admin
    oc policy add-role-to-user admin system:serviceaccount:enmasse:address-space-admin
  3. Install the default plan and resource configuration:

    oc create -f ./resources/resource-definitions/resource-definitions.yaml
    oc create -f ./resources/plans/standard-plans.yaml
  4. Deploy the default address space:

    oc process -f ./resources/templates/address-space.yaml NAME=default NAMESPACE=enmasse TYPE=standard PLAN=unlimited-standard | oc create -f -
  5. Deploy the template for creating addresses:

    oc create -f ./resources/templates/address.yaml -n enmasse

    You can use this template later for creating addresses from the command line.

  6. Deploy the address space controller:

    oc create -f ./resources/address-space-controller/address-space-definitions.yaml
    oc create -f ./resources/address-space-controller/deployment.yaml

    The deployments required for running EnMasse are now created.

  7. EnMasse is running once all pods in the enmasse namespace are in the Running state:

    oc get pods -n enmasse

2.2. Installing EnMasse on Kubernetes

These steps follow the manual deployment procedure and work on any platform supporting the kubectl command-line client.

To simplify deployment, see the deploy.sh script, which works on Linux and Mac. You can invoke the deployment script with -h to view a list of options.

Prerequisite

To install EnMasse, a Kubernetes cluster is required. You can use minikube if you want to run EnMasse on your laptop.

2.2.1. Downloading EnMasse

Procedure

2.2.2. Creating the project for EnMasse

Procedure
  1. Create the enmasse namespace:

    kubectl create namespace enmasse
  2. Set the enmasse namespace as the default namespace:

    kubectl config set-context $(kubectl config current-context) --namespace=enmasse

2.2.3. Deploying authentication services

EnMasse requires at least one authentication service to be deployed. The authentication service can be none (allow all), standard (Keycloak), or external (not managed by EnMasse).

Deploying the none authentication service
Procedure
  1. Create a certificate to use with the none authentication service. For testing purposes, you can create a self-signed certificate:

    mkdir -p none-authservice-cert
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=none-authservice.enmasse.svc.cluster.local" -out none-authservice-cert/tls.crt -keyout none-authservice-cert/tls.key
  2. Create a secret with the none authentication service certificate:

    kubectl create secret tls none-authservice-cert --cert=none-authservice-cert/tls.crt --key=none-authservice-cert/tls.key
  3. Create the none authentication service:

    kubectl create -f ./resources/none-authservice/service.yaml
    kubectl create -f ./resources/none-authservice/deployment.yaml
Deploying the standard authentication service
Procedure
  1. Create a certificate to use with the standard authentication service. For testing purposes, you can create a self-signed certificate:

    mkdir -p standard-authservice-cert
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=standard-authservice.enmasse.svc.cluster.local" -out standard-authservice-cert/tls.crt -keyout standard-authservice-cert/tls.key
  2. Create a secret with the standard authentication service certificate:

    kubectl create secret tls standard-authservice-cert --cert=standard-authservice-cert/tls.crt --key=standard-authservice-cert/tls.key
  3. Create a secret with the Keycloak admin credentials. Choose the password wisely because this user has complete control over authentication and authorization policies:

    kubectl create secret generic keycloak-credentials --from-literal=admin.username=admin --from-literal=admin.password=myrandompassword
  4. Create the standard authentication service:

    kubectl create -f ./resources/standard-authservice/service.yaml
    kubectl create -f ./resources/standard-authservice/keycloak-deployment.yaml
    kubectl create -f ./resources/standard-authservice/controller-deployment.yaml
    kubectl create -f ./resources/standard-authservice/pvc.yaml
    kubectl create -f ./resources/standard-authservice/route.yaml
  5. Create the Keycloak configuration used by the controller and service. To make the standard authentication service accessible to the messaging console and the Keycloak operator, you must specify the httpUrl setting. If you are running a local cluster without a public DNS, use the internal service IP address for the host name; otherwise, use the host name of the external route. To obtain the service IP address, use this command:

    kubectl get service standard-authservice -o jsonpath={.spec.clusterIP}
  6. Create the Keycloak configuration:

    AUTH_HOST=value from one of the previous commands
    AUTH_PORT=8443 if using the service ip, 443 if using the route host
    kubectl create configmap keycloak-config --from-literal=hostname=standard-authservice --from-literal=port=5671 --from-literal=httpUrl=https://$AUTH_HOST:$AUTH_PORT/auth --from-literal=caSecretName=standard-authservice-cert

2.2.4. Deploying the address space controller

The address space controller is responsible for creating the infrastructure used by address spaces.

Note
To install EnMasse on Kubernetes, you must have cluster-admin access to set up the required roles for creating namespaces and managing resources in those namespaces; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Procedure
  1. Create a service account for the EnMasse address space controller:

    kubectl create sa enmasse-admin
  2. Install the default plan and resource configuration:

    kubectl create -f ./resources/resource-definitions/resource-definitions.yaml
    kubectl create -f ./resources/plans/standard-plans.yaml
    kubectl create -f ./resources/plans/brokered-plans.yaml
  3. Deploy the address space controller:

    kubectl create -f ./resources/address-space-controller/address-space-definitions.yaml
    kubectl create -f ./resources/address-space-controller/deployment.yaml

2.2.5. (Optional) Deploying the API server

The API server provides a REST API for creating address spaces and addresses. It can also serve as a Kubernetes API server if it is registered as an APIService.

Note
To install EnMasse on Kubernetes, you must have cluster-admin access to set up the required roles for delegating authentication to the Kubernetes master; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Procedure
  1. Create a service account for the EnMasse API server:

    kubectl create sa enmasse-admin
  2. Create a certificate to use with the API server. For testing purposes, you can create a self-signed certificate:

    mkdir -p api-server-cert/
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=api-server.enmasse.svc.cluster.local" -out api-server-cert/tls.crt -keyout api-server-cert/tls.key
  3. Create a secret containing the API server certificate:

    kubectl create secret tls api-server-cert --cert=api-server-cert/tls.crt --key=api-server-cert/tls.key
  4. Create the API server configuration:

    kubectl create configmap api-server-config --from-literal=enableRbac=false
  5. Deploy the API server:

    kubectl create -f ./resources/api-server/deployment.yaml
    kubectl create -f ./resources/api-server/service.yaml

2.2.6. (Optional) Deploying the service broker

The service broker provides an implementation of the Open Service Broker API that integrates with the Kubernetes Service Catalog. The service broker requires the standard authentication service to be deployed.

Note
To install EnMasse on Kubernetes, you must have cluster-admin access to set up the required roles for delegating authentication to the Kubernetes master; otherwise, you are restricted to a single address space. For more information about how to deploy without cluster-admin access, which restricts EnMasse to a single address space, see Deploying EnMasse to a single address space.
Prerequisite
  • The service broker requires the standard authentication service to be deployed.

Procedure
  1. Create a service account for the EnMasse service broker:

    kubectl create sa enmasse-admin
  2. Create a certificate to use for the service broker. For testing purposes, you can create a self-signed certificate:

    mkdir -p service-broker-cert/
    openssl req -new -x509 -batch -nodes -days 11000 -subj "/O=io.enmasse/CN=service-broker.enmasse.svc.cluster.local" -out service-broker-cert/tls.crt -keyout service-broker-cert/tls.key
  3. Create a secret containing the service broker certificate:

    kubectl create secret tls service-broker-cert --cert=service-broker-cert/tls.crt --key=service-broker-cert/tls.key
  4. Create a secret containing the service broker credentials:

    kubectl create secret generic service-broker-secret --from-literal=keycloak.username=admin --from-literal=keycloak.password=admin --from-literal=keycloakCa.crt="$(kubectl get secret standard-authservice-cert -o jsonpath='{.data.tls\.crt}' | base64 --decode)"
  5. Deploy the service broker:

    kubectl create -f ./resources/service-broker/deployment.yaml
    kubectl create -f ./resources/service-broker/service.yaml
  6. To ensure the service broker redirects correctly, you must specify the keycloakUrl setting. If you are running a local cluster without a public DNS, use the internal service IP address for the host name; otherwise, use the host name of the external route. To obtain the service IP address, use this command:

    kubectl get service standard-authservice -o jsonpath={.spec.clusterIP}
  7. Create the service broker configuration:

    AUTH_HOST=value from one of the previous commands
    AUTH_PORT=8443 if using the service ip, 443 if using the route host
    kubectl create configmap service-broker-config --from-literal=enableRbac=false --from-literal=keycloakUrl=https://$AUTH_HOST:$AUTH_PORT/auth
  8. Create a secret with a token for the Service Catalog. The kubectl client does not provide a whoami command, so substitute an API token that is valid for your cluster user:

    kubectl create secret generic service-catalog-credentials --from-literal=token=<token>
  9. Register the service broker with the Service Catalog. Because service-broker.yaml is an OpenShift template, processing it requires the oc client:

    oc process -f ./resources/templates/service-broker.yaml BROKER_NAMESPACE=enmasse | kubectl create -f -

3. Address space and address plans

3.1. Plans

Plans are used to configure quotas and control the resources consumed by a particular deployment. Plans are configured by the EnMasse service operator and are selected when creating an address space and an address.

Resource definitions are a description of resources referenced by the plans, which can be configured with a template and parameters to be used when instantiating the template.

By default, EnMasse comes with a set of plans and resource definitions that are sufficient for most use cases.

3.1.1. Address space plans

Address space plans specify the quota available to a given address space. By default, EnMasse includes an unlimited quota plan for both the standard and brokered address spaces.

Plans are configured as ConfigMaps. The following is an example plan for the standard address space:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restrictive-plan
  labels:
    type: address-space-plan
data:
  definition: |-
    {
      "apiVersion": "enmasse.io/v1",
      "kind": "AddressSpacePlan",
      "metadata": {
        "name": "restrictive-plan"
        "annotations": {
          "defined-by": "standard-space"
        }
      },
      "displayName": "Restrictive Plan",
      "displayOrder": 0,
      "shortDescription": "A plan with restrictive quotas",
      "longDescription": "A plan with restrictive quotas for the standard address space",
      "uuid": "74b9a40e-117e-11e8-b4e1-507b9def37d9",
      "addressSpaceType": "standard",
      "addressPlans": [
        "small-queue",
        "small-anycast"
      ],
      "resources": [
        {
          "name": "router",
          "min": "0.0",
          "max": "2.0"
        },
        {
          "name": "broker",
          "min": "0.0",
          "max": "2.0"
        },
        {
          "name": "aggregate",
          "min": "0.0",
          "max": "2.0"
        }
      ]
    }

The following fields are required:

  • metadata.name

  • resources

  • addressPlans

  • addressSpaceType

The other fields are used by the EnMasse console UI. Note the annotation defined-by, which points to a resource definition describing the infrastructure that must be deployed when an address space using this plan is created.
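Because the plan definition is embedded as a string in the ConfigMap's data field, a malformed definition (for example, a missing comma) is easy to ship by accident. A quick local check, assuming python3 is available, is to validate the JSON before creating the ConfigMap; the file name below is illustrative:

```shell
# Write a minimal plan definition to a scratch file and validate it
# before embedding it in a ConfigMap (file name is illustrative).
cat > /tmp/restrictive-plan.json <<'EOF'
{
  "apiVersion": "enmasse.io/v1",
  "kind": "AddressSpacePlan",
  "metadata": {
    "name": "restrictive-plan",
    "annotations": {
      "defined-by": "standard-space"
    }
  },
  "addressSpaceType": "standard",
  "addressPlans": ["small-queue", "small-anycast"]
}
EOF

# Exits non-zero and reports the offending position if the JSON is invalid.
python3 -m json.tool /tmp/restrictive-plan.json > /dev/null && echo "valid JSON"
```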

3.1.2. Address plans

Address plans specify the expected resource usage of a given address. The sum of the resource usage for all resource types determines the amount of infrastructure provisioned for an address space. Each router and broker pod has a maximum usage of one credit. If a new address requires additional resources and the resource consumption is within the address space plan limits, a new pod is created automatically to handle the increased load.

The address space plan shown in the previous section references two address plans: small-queue and small-anycast. These address plans are stored as ConfigMaps and are defined as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: small-queue-plan
  labels:
    type: address-plan
data:
  definition: |-
    {
      "apiVersion": "enmasse.io/v1",
      "kind": "AddressPlan",
      "metadata": {
        "name": "small-queue"
      },
      "displayName": "Small queue plan",
      "displayOrder": 0,
      "shortDescription": "A plan for small queues",
      "longDescription": "A plan for small queues that consume little resources",
      "uuid": "98feabb6-1183-11e8-a769-507b9def37d9",
      "addressType": "queue",
      "requiredResources": [
        {
          "name": "router",
          "credit": 0.2
        },
        {
          "name": "broker",
          "credit": 0.3
        }
      ]
    }

The following fields are required:

  • metadata.name

  • requiredResources

  • addressType

With this plan, a single router can support five address instances, and a single broker can support three. If the number of addresses with this plan increases to four, another broker is created. If it increases further to six, another router is created as well.

Note, however, that although the address space plan allows two routers and two brokers to be deployed, the aggregate resource limits the total to two pods. This means that the address space is restricted to three addresses with the small-queue plan.
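The capacity numbers above follow directly from the plan credits: each router or broker pod provides a capacity of 1.0 credit. A quick sketch of the arithmetic using awk:

```shell
# Credits are taken from the small-queue plan shown above; each pod
# provides a capacity of 1.0 credit. The small epsilon guards against
# floating-point rounding in the division.
awk 'BEGIN {
  router_credit = 0.2
  broker_credit = 0.3
  printf "addresses per router: %d\n", int(1 / router_credit + 1e-9)
  printf "addresses per broker: %d\n", int(1 / broker_credit + 1e-9)
}'
```

This prints 5 addresses per router and 3 per broker, matching the text above.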

The small-anycast plan does not consume any broker resources, so the aggregate limit allows two routers to be provisioned at the expense of not being able to create any brokers:

apiVersion: v1
kind: ConfigMap
metadata:
  name: small-anycast-plan
  labels:
    type: address-plan
data:
  definition: |-
    {
      "apiVersion": "enmasse.io/v1",
      "kind": "AddressPlan",
      "metadata": {
        "name": "small-anycast"
      },
      "displayName": "Small anycast plan",
      "displayOrder": 0,
      "shortDescription": "A plan for small anycast addresses",
      "longDescription": "A plan for small anycast addresses that consume little resources",
      "uuid": "cb61f440-1184-11e8-adda-507b9def37d9",
      "addressType": "anycast",
      "requiredResources": [
        {
          "name": "router",
          "credit": 0.2
        }
      ]
    }

With this plan, up to 10 addresses can be created.

3.1.3. Resource definitions

A resource describes a template along with a set of parameters. The resource definition is referenced from the plans. At present, only three resource definitions are supported:

  • router

  • broker

  • broker-topic

Resource definitions with other names will not work with EnMasse. It is, however, possible to modify these resource definitions to change the template and template parameters used when instantiating the infrastructure. For instance, the following ConfigMap, which increases the memory available to brokers, can replace the default one provided by EnMasse:

apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-resource
  labels:
    type: resource-definition
data:
  definition: |-
    {
      "apiVersion": "enmasse.io/v1",
      "kind": "ResourceDefinition",
      "metadata": {
        "name": "broker"
      },
      "template": "queue-persisted",
      "parameters": [
        {
          "name": "BROKER_MEMORY_LIMIT",
          "value": "2Gi"
        }
      ]
    }

4. Monitoring

4.1. Monitoring EnMasse

EnMasse comes with add-ons for Prometheus and Grafana for monitoring the service. Cluster-admin privileges are required for Prometheus to monitor pods in the cluster.

4.1.1. Deploying Prometheus

Procedure
  1. Create Prometheus deployment

    kubectl create -f ./resources/prometheus/prometheus.yaml -n enmasse

4.1.2. Deploying Grafana

Procedure
  1. Create Grafana deployment

    kubectl create -f ./resources/grafana/grafana.yaml -n enmasse
  2. Expose Grafana service

    kubectl expose service grafana

Grafana accepts the username 'admin' and password 'admin' by default. See the Prometheus documentation on how to connect Grafana to Prometheus. Use prometheus.enmasse.svc.cluster.local as the Prometheus hostname.
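
When registering Prometheus as a Grafana data source, the definition takes roughly the following shape (the field names follow Grafana's data source model; port 9090 is the Prometheus default and an assumption here):

```json
{
  "name": "enmasse-prometheus",
  "type": "prometheus",
  "access": "proxy",
  "url": "http://prometheus.enmasse.svc.cluster.local:9090"
}
```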

5. Managing address spaces and addresses

5.1. Address Model

The EnMasse address model involves three distinct concepts:

  • types of address spaces

  • types of addresses within each address space

  • available plans

5.1.1. Address Space

An address space is a group of addresses that can be accessed through a single connection (per protocol). This means that clients connected to the endpoints of an address space can send messages to, or receive messages from, any address within that address space that they are authorized to use. An address space can support multiple protocols, as defined by the address space type.

5.1.2. Address

An address is part of an address space and represents a destination used for sending and receiving messages. An address has a type, which defines the semantics of sending messages to and receiving messages from that address.

5.1.3. Plans

Both address spaces and addresses can be restricted by a plan, which enforces a limit on resource usage across multiple dimensions. Note that the set of plans currently offered might be extended in the future, and the constraints imposed by a plan within an address space might change as operational experience is gained.

Address Space Plans

Each address space has a plan that restricts the aggregated resource usage within an address space. Each address space type can translate the plan into a set of restrictions, for example, the ability to scale up to five routers or to create up to 10 addresses. These restrictions are documented within each address space.

Address Plans

The usage of each address is also constrained by a plan. Each address type translates the plan into a set of restrictions, for example, up to five consumers or up to 100 messages per hour. These restrictions are documented within each address type.

5.1.4. Standard Address Space

The default address space in EnMasse is the standard address space and it consists of an AMQP router network in combination with attachable storage units. The implementation of a storage unit is hidden from the client and the routers with a well-defined API. This address space type is appropriate when you have many connections and addresses. However, it has the following limitations: no transaction support, no message ordering, no selectors on queues, and no message groups.

Clients connect and send and receive messages in this address space using the AMQP or MQTT protocols. Note that MQTT does not support QoS 2 or retained messages.

Address Types

The standard address space supports five address types:

  • queue

  • topic

  • anycast

  • multicast

  • subscription

Queue

The queue address type is a store-and-forward queue. This address type is appropriate for implementing a distributed work queue, handling traffic bursts, and other use cases where you want to decouple the producer and consumer. A queue can be sharded across multiple storage units; however, message order is no longer guaranteed.

Topic

The topic address type supports the publish-subscribe messaging pattern where you have 1..N producers and 1..M consumers. Each message published to a topic address is forwarded to all subscribers for that address. A subscriber can also be durable, in which case messages are kept until the subscriber has acknowledged them.

Anycast

The anycast address type is a scalable direct address for sending messages to one consumer. Messages sent to an anycast address are not stored, but forwarded directly to the consumer. This method makes this address type ideal for request-reply (RPC) uses or even work distribution. This is the cheapest address type as it does not require any persistence.

Multicast

The multicast address type is a scalable direct address for sending messages to multiple consumers. Messages sent to a multicast address are forwarded to all consumers receiving messages on that address. It is important to note that only pre-settled messages can be sent to multicast addresses, as message acknowledgements from consumers are not propagated to producers.

Subscription

The subscription address type allows a subscription to be created for a topic that holds messages published to the topic even when the subscriber is not attached. A consumer accesses the subscription using the address <topic-address>::<subscription-address>. For example, for a subscription 'mysub' on a topic 'mytopic', the consumer consumes from the address 'mytopic::mysub'.
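
The combined address is just the two names joined by '::'; as a trivial sketch (the helper name is illustrative):

```python
def subscription_address(topic, subscription):
    # consumers attach to "<topic-address>::<subscription-address>"
    return "%s::%s" % (topic, subscription)
```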

5.1.5. Brokered Address Space

The brokered address space is designed to support broker-specific features, at the cost of limited scale in terms of the number of connections and addresses. This address space supports JMS transactions, message groups, and so on.

Clients connect and send and receive messages in this address space using the AMQP protocol.

Address types
  • queue

  • topic

Queue

The queue address type is a store-and-forward queue. This address type is appropriate for implementing a distributed work queue, handling traffic bursts, and other use cases where you want to decouple the producer and consumer. A queue in the brokered address spaces supports selectors, message groups, transactions, and other JMS features.

Topic

The topic address type supports the publish-subscribe messaging pattern where you have 1..N producers and 1..M consumers. Each message published to a topic address is forwarded to all subscribers for that address. A subscriber can also be durable, in which case messages are kept until the subscriber has acknowledged them.

5.2. Configuring EnMasse using the command line

EnMasse can be configured to support manipulating address spaces and addresses using the Kubernetes and OpenShift command-line tools. See [oc-register-apiservice] for how to set up the API server to support this if your installation is not already configured with it.

5.2.1. Creating an Address Space

Procedure
  1. Save the following YAML data to a file 'space.yaml':

    apiVersion: enmasse.io/v1alpha1
    kind: AddressSpace
    metadata:
        name: myspace
    spec:
        type: standard
        plan: unlimited-standard
  2. Create the address space using the command line (replace oc with kubectl if using Kubernetes):

    oc create -f space.yaml
  3. You should now be able to list address spaces:

    oc get addressspaces

5.2.2. Creating an Address

Procedure
  1. Save the following YAML data to a file 'address.yaml' (NOTE: Prefixing the name with the address space name is required to ensure addresses from different address spaces do not collide):

    apiVersion: enmasse.io/v1alpha1
    kind: Address
    metadata:
        name: myspace.myqueue
    spec:
        address: myqueue
        type: queue
        plan: pooled-queue
  2. Create the address using the command line (replace oc with kubectl if using Kubernetes):

    oc create -f address.yaml
  3. You should now be able to list addresses:

    oc get addresses

5.3. Configuring EnMasse using a REST API

EnMasse provides an API that can be used for configuring address spaces and addresses within those address spaces. Clients can be configured to authenticate using RBAC.

All API URIs are namespaced. This means that address spaces are scoped within a particular namespace, and addresses are scoped within an address space. As a result, an address in address space A may have the same name as an address in address space B.

Likewise, an address space in namespace A can have the same name as an address space in namespace B.
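
The scoping can be seen directly in the URI layout used by the REST API (a hypothetical helper; the path segments match the curl examples in the following sections):

```python
API_ROOT = "/apis/enmasse.io/v1alpha1"

def address_space_uri(namespace, address_space):
    # address spaces are scoped by namespace
    return "%s/namespaces/%s/addressspaces/%s" % (API_ROOT, namespace, address_space)

def address_uri(namespace, address_space, address):
    # addresses are scoped by address space, so 'myqueue' can exist in
    # two different address spaces without colliding
    return "%s/addresses/%s" % (address_space_uri(namespace, address_space), address)
```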

5.3.1. Creating an Address Space

Procedure
  1. Save the following JSON data to a file 'space.json':

    {
        "apiVersion": "enmasse.io/v1alpha1",
        "kind": "AddressSpace",
        "metadata": {
            "name": "myspace"
        },
        "spec": {
            "type": "standard",
            "plan": "unlimited-standard"
        }
    }
  2. POST the address space definition to the API using curl:

    TOKEN=`oc whoami -t`
    curl -X POST -T space.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -k https://$(oc get route restapi -o jsonpath='{.spec.host}')/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces

    This creates the infrastructure required for that address space. Replace [:namespace] with the namespace of the application requesting the address space. Starting up the address space takes a while, usually depending on how quickly the Docker images for the various components can be downloaded.

5.3.2. Viewing Address Space Status

Procedure
  • You can use the API to check the status of the address space:

    TOKEN=`oc whoami -t`
    curl -k -H "Authorization: Bearer $TOKEN" https://$(oc get route restapi -o jsonpath='{.spec.host}')/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces/myspace

    You can consider the address space to be ready to use when status.isReady is true in the returned JSON object.
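
A minimal readiness check over the returned JSON object might look like this (a sketch; only the status.isReady field is assumed). The same check applies to addresses later on:

```python
def is_ready(resource):
    # True once the API reports status.isReady for the resource
    return resource.get("status", {}).get("isReady", False) is True

pending = {"kind": "AddressSpace", "status": {"isReady": False}}
active = {"kind": "AddressSpace", "status": {"isReady": True}}
```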

5.3.3. Creating Addresses

Procedure
  1. To create addresses in the standard address space, save the address definition to a file:

    {
      "apiVersion": "enmasse.io/v1alpha1",
      "kind": "Address",
      "metadata": {
          "addressSpace": "myspace"
      },
      "spec": {
        "address": "myqueue",
        "type": "queue",
        "plan": "pooled-queue"
      }
    }
  2. You can then create the address using the following API. Replace the namespace with the same as for the address space:

    TOKEN=`oc whoami -t`
    curl -X POST -T address.json -H "content-type: application/json" -H "Authorization: Bearer $TOKEN" -k https://$(oc get route restapi -o jsonpath='{.spec.host}')/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces/myspace/addresses

5.3.4. Viewing Configured Addresses

Procedure
  • To check which addresses are configured:

    curl -k https://$(oc get route restapi -o jsonpath='{.spec.host}')/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces/myspace/addresses

    The addresses are ready to be used by messaging clients once the status.isReady field of each address is set to true.

6. Connecting applications to EnMasse

6.1. Connecting to EnMasse

To connect to the messaging service from outside the OpenShift or Kubernetes cluster, TLS must be used, with SNI set to the fully qualified hostname of the address space. The port used is 443.

The messaging protocols supported depend on the type of the address space.

6.1.1. Client Examples

Simple examples are shown here for the following clients:

  • Apache Qpid Proton Python

  • Apache Qpid JMS

  • Rhea JavaScript Client

  • Apache Qpid Proton C++

  • AMQP.Net Lite

These examples all assume you have created an address of type 'queue' named 'myqueue'.

Apache Qpid Proton Python
from __future__ import print_function, unicode_literals
from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class HelloWorld(MessagingHandler):
    def __init__(self, server, address):
        super(HelloWorld, self).__init__()
        self.server = server
        self.address = address

    def on_start(self, event):
        conn = event.container.connect(self.server)
        event.container.create_receiver(conn, self.address)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        event.sender.send(Message(body="Hello World!"))
        event.sender.close()

    def on_message(self, event):
        print(event.message.body)
        event.connection.close()

Container(HelloWorld("amqps://<messaging-route-hostname>:443", "myqueue")).run()
Apache Qpid JMS
package org.apache.qpid.jms.example;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        try {
            // The configuration for the Qpid InitialContextFactory has been supplied in
            // a jndi.properties file in the classpath, which results in it being picked
            // up automatically by the InitialContext constructor.
            Context context = new InitialContext();

            ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
            Destination queue = (Destination) context.lookup("myQueueLookup");

            Connection connection = factory.createConnection(System.getProperty("USER"), System.getProperty("PASSWORD"));
            connection.setExceptionListener(new MyExceptionListener());
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            MessageProducer messageProducer = session.createProducer(queue);
            MessageConsumer messageConsumer = session.createConsumer(queue);

            TextMessage message = session.createTextMessage("Hello world!");
            messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);
            TextMessage receivedMessage = (TextMessage) messageConsumer.receive(2000L);

            if (receivedMessage != null) {
                System.out.println(receivedMessage.getText());
            } else {
                System.out.println("No message received within the given timeout!");
            }

            connection.close();
        } catch (Exception exp) {
            System.out.println("Caught exception, exiting.");
            exp.printStackTrace(System.out);
            System.exit(1);
        }
    }

    private static class MyExceptionListener implements ExceptionListener {
        @Override
        public void onException(JMSException exception) {
            System.out.println("Connection ExceptionListener fired, exiting.");
            exception.printStackTrace(System.out);
            System.exit(1);
        }
    }
}

with jndi.properties:

connectionfactory.myFactoryLookup = amqps://<messaging-route-hostname>:443?transport.trustAll=true&transport.verifyHost=false
queue.myQueueLookup = myqueue
Rhea JavaScript Client
var container = require('rhea');
container.on('connection_open', function (context) {
    context.connection.open_receiver('myqueue');
    context.connection.open_sender('myqueue');
});
container.on('message', function (context) {
    console.log(context.message.body);
    context.connection.close();
});
container.on('sendable', function (context) {
    context.sender.send({body:'Hello World!'});
    context.sender.detach();
});
container.connect({username: '<username>', password: '<password>', port:443, host:'<messaging-route-hostname>', transport:'tls', rejectUnauthorized:false});
Rhea JavaScript Client using WebSockets
var container = require('rhea');
var WebSocket = require('ws');

container.on('connection_open', function (context) {
    context.connection.open_receiver('myqueue');
    context.connection.open_sender('myqueue');
});
container.on('message', function (context) {
    console.log(context.message.body);
    context.connection.close();
});
container.on('sendable', function (context) {
    context.sender.send({body:'Hello World!'});
    context.sender.detach();
});

var ws = container.websocket_connect(WebSocket);
container.connect({username: '<username>', password: '<password>', connection_details: ws("wss://<messaging-route-hostname>:443", ["binary"], {rejectUnauthorized: false})});
Apache Qpid Proton C++

The C++ client has equivalent simple_recv and simple_send examples with the same options as the Python client. However, the C++ library does not perform the same level of processing on the URL; in particular it won’t take amqps:// to imply using TLS, so the example needs to be modified as follows:

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/default_container.hpp>
#include <proton/delivery.hpp>
#include <proton/message.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/ssl.hpp>
#include <proton/thread_safe.hpp>
#include <proton/tracker.hpp>
#include <proton/url.hpp>

#include <iostream>

#include "fake_cpp11.hpp"

class hello_world : public proton::messaging_handler {
  private:
    proton::url url;

  public:
    hello_world(const std::string& u) : url(u) {}

    void on_container_start(proton::container& c) OVERRIDE {
        proton::connection_options co;
        co.ssl_client_options(proton::ssl_client_options());
        c.client_connection_options(co);
        c.connect(url);
    }

    void on_connection_open(proton::connection& c) OVERRIDE {
        c.open_receiver(url.path());
        c.open_sender(url.path());
    }

    void on_sendable(proton::sender &s) OVERRIDE {
        proton::message m("Hello World!");
        s.send(m);
        s.close();
    }

    void on_message(proton::delivery &d, proton::message &m) OVERRIDE {
        std::cout << m.body() << std::endl;
        d.connection().close();
    }
};

int main(int argc, char **argv) {
    try {
        std::string url = argc > 1 ? argv[1] : "<messaging-route-hostname>:443/myqueue";

        hello_world hw(url);
        proton::default_container(hw).run();

        return 0;
    } catch (const std::exception& e) {
        std::cerr << e.what() << std::endl;
    }

    return 1;
}
AMQP.Net Lite
using System;
using Amqp;

namespace Test
{
    public class Program
    {
        public static void Main(string[] args)
        {
            String url = (args.Length > 0) ? args[0] : "amqps://<messaging-route-hostname>:443";
            String address = (args.Length > 1) ? args[1] : "myqueue";

            Connection.DisableServerCertValidation = true;
            Connection connection = new Connection(new Address(url));
            Session session = new Session(connection);
            SenderLink sender = new SenderLink(session, "test-sender", address);

            Message messageSent = new Message("Test Message");
            sender.Send(messageSent);

            ReceiverLink receiver = new ReceiverLink(session, "test-receiver", address);
            Message messageReceived = receiver.Receive(TimeSpan.FromSeconds(2));
            Console.WriteLine(messageReceived.Body);
            receiver.Accept(messageReceived);

            sender.Close();
            receiver.Close();
            session.Close();
            connection.Close();
        }
    }
}

Appendix A: Quick start guides

A.1. EnMasse on OpenShift

This guide walks through the process of setting up EnMasse on OpenShift with clients for sending and receiving messages. It deploys EnMasse in single-tenant mode with the 'none' authentication service.

Prerequisites
  • To install EnMasse, the OpenShift client tools are required. You can download the OpenShift Origin client from OpenShift Origin. EnMasse has been tested to work with the latest stable release of the OpenShift Origin Client.

  • An OpenShift cluster is required. If you do not have an OpenShift cluster available, see Minishift for an example of how to run a local instance of OpenShift on your machine.

  • A method to generate certificates is required. This guide uses OpenSSL.

Installing EnMasse


This guide uses a shell script for deploying EnMasse. Windows users are advised to look at Installing EnMasse.

Deploying EnMasse
Using script

Invoke the deployment script to deploy EnMasse

./deploy.sh -m "https://localhost:8443" -n enmasse  -t openshift
Using ansible
ansible-playbook -i ansible/inventory/singletenant-standard.example ansible/playbooks/openshift/deploy_all.yml

This will create the deployments required for running EnMasse. Starting up EnMasse takes a while, usually depending on how quickly the Docker images for the various components can be downloaded. In the meantime, you can start creating your address configuration.

Configuring addresses
Address types

EnMasse is configured with a set of addresses that you can use for messages. Currently, EnMasse supports four different address types:

  • Brokered queues

  • Brokered topics (pub/sub)

  • Direct anycast addresses

  • Direct broadcast addresses

See the Address Model for details. EnMasse also comes with a console that you can use for managing addresses. You can get the console URL by running the following command:

echo "https://$(oc get route -o jsonpath='{.spec.host}' console)"

You can also deploy the address configuration using the REST API. See Configuring EnMasse using a REST API for details on the resources consumed by the API. Here is an example configuration with all four variants that you can save to addresses.json:

{
  "apiVersion": "enmasse.io/v1alpha1",
  "kind": "AddressList",
  "items": [
    {
      "spec": {
        "address": "myqueue",
        "type": "queue",
        "plan": "sharded-queue"
      }
    },
    {
      "spec": {
        "address": "mytopic",
        "type": "topic",
        "plan": "sharded-topic"
      }
    },
    {
      "spec": {
        "address": "myanycast",
        "type": "anycast",
        "plan": "standard-anycast"
      }
    },
    {
      "spec": {
        "address": "mymulticast",
        "type": "multicast",
        "plan": "standard-multicast"
      }
    }
  ]
}

To deploy this configuration, you must currently use an HTTP client such as curl:

curl -X POST -H "content-type: application/json" --data-binary @addresses.json -k https://$(oc get route -o jsonpath='{.spec.host}' restapi)/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces/default/addresses

This will connect to the address controller REST API and deploy the address config in the 'default' address space.

Sending and receiving messages
Connecting with AMQP

For sending and receiving messages, have a look at an example Python sender and receiver.

To send and receive messages, you should connect to the exposed route. To start a receiver, run:

./simple_recv.py -a "amqps://$(oc get route -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

This will block until it has received 10 messages. To start the sender, run:

./simple_send.py -a "amqps://$(oc get route -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

The server certificate is not verified in the above examples. To fetch the certificate, run:

mkdir -p certs
oc get secret external-certs-messaging -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt

You can modify the client code to use this cert to verify the server connection.

Have a look at Connecting to EnMasse for more client examples.

Connecting using MQTT

For sending and receiving messages, you can use the paho-mqtt client library. To connect, first fetch the server certificate:

mkdir -p certs
oc get secret external-certs-mqtt  -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt
Subscriber client

Save the following to tls_mqtt_recv.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe(opts.topic, int(opts.qos))

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Receive messages from the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("recv")
client.on_connect = on_connect
client.on_message = on_message
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

In order to subscribe to a topic (for example, mytopic from the previous address configuration), the subscriber client can be used in the following way:

./tls_mqtt_recv.py -c "$(oc get route -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt
Publisher client

Save the following to tls_mqtt_send.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.publish(opts.topic, opts.message, int(opts.qos))

def on_publish(client, userdata, mid):
    print("mid: " + str(mid))
    client.disconnect()

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Sends messages to the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

parser.add_option("-m", "--message", default="Hello",
                  help="message to publish (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("send")
client.on_connect = on_connect
client.on_publish = on_publish
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

To start the publisher, run the client in the following way:

./tls_mqtt_send.py -c "$(oc get route -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt -m "Hello EnMasse"

The publisher publishes the message and disconnects from EnMasse. The message is received by the previously connected subscriber.

Conclusion

We have seen how to set up EnMasse on OpenShift, and how to communicate with it using AMQP and MQTT clients.

A.2. EnMasse on Kubernetes

This guide walks through the process of setting up EnMasse on a Kubernetes cluster together with clients for sending and receiving messages. It deploys EnMasse in single-tenant mode with the 'none' authentication service.


Installing


This guide uses a shell script for deploying EnMasse. Windows users are advised to look at Installing.

Deploying EnMasse
Using script

Invoke the deployment script to deploy EnMasse

./deploy.sh -m "https://localhost:8443" -n enmasse  -t kubernetes
Using ansible
ansible-playbook -i ansible/inventory/singletenant-standard.example ansible/playbooks/openshift/deploy_all.yml

This will create the deployments required for running EnMasse. Starting up EnMasse takes a while, usually depending on how quickly the Docker images for the various components can be downloaded. In the meantime, you can start creating your address configuration.

Role Based Access Control (RBAC)

The Kubernetes deployment script and YAML files do not currently support Role-Based Access Control (RBAC). In Kubernetes clusters with RBAC enabled, you must additionally bind the default service account to the edit role, and the enmasse-admin service account to the cluster-admin and admin roles:

kubectl create clusterrolebinding enmasse-admin-cluster-binding --clusterrole=cluster-admin --serviceaccount=enmasse:enmasse-admin
kubectl create rolebinding default-edit-binding --clusterrole=edit --serviceaccount=enmasse:default -n enmasse
kubectl create rolebinding enmasse-admin-admin-binding --clusterrole=admin --serviceaccount=enmasse:enmasse-admin -n enmasse

Note: The cluster-admin role gives the enmasse-admin service account unlimited access to the Kubernetes cluster.

Configuring addresses
Address types

EnMasse is configured with a set of addresses that you can use for messages. Currently, EnMasse supports four different address types:

  • Brokered queues

  • Brokered topics (pub/sub)

  • Direct anycast addresses

  • Direct broadcast addresses

See the Address Model for details. EnMasse also comes with a console that you can use for managing addresses. You can get the console URL by running the following command:

echo "https://$(kubectl get ingress -o jsonpath='{.spec.host}' console)"

You can also deploy the address configuration using the REST API. See Configuring EnMasse using a REST API for details on the resources consumed by the API. Here is an example configuration with all 4 variants that you can save to addresses.json:

{
  "apiVersion": "enmasse.io/v1alpha1",
  "kind": "AddressList",
  "items": [
    {
      "spec": {
        "address": "myqueue",
        "type": "queue",
        "plan": "sharded-queue"
      }
    },
    {
      "spec": {
        "address": "mytopic",
        "type": "topic",
        "plan": "sharded-topic"
      }
    },
    {
      "spec": {
        "address": "myanycast",
        "type": "anycast",
        "plan": "standard-anycast"
      }
    },
    {
      "spec": {
        "address": "mymulticast",
        "type": "multicast",
        "plan": "standard-multicast"
      }
    }
  ]
}
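Before deploying, the AddressList above can be sanity-checked with a short Python sketch. The validate_address_list helper is illustrative, not part of EnMasse; it only checks the structure used in this guide.

```python
import json

# The example AddressList from above, inlined for the check
addresses = json.loads("""
{
  "apiVersion": "enmasse.io/v1alpha1",
  "kind": "AddressList",
  "items": [
    {"spec": {"address": "myqueue", "type": "queue", "plan": "sharded-queue"}},
    {"spec": {"address": "mytopic", "type": "topic", "plan": "sharded-topic"}},
    {"spec": {"address": "myanycast", "type": "anycast", "plan": "standard-anycast"}},
    {"spec": {"address": "mymulticast", "type": "multicast", "plan": "standard-multicast"}}
  ]
}
""")

# The four address types supported by EnMasse
VALID_TYPES = {"queue", "topic", "anycast", "multicast"}

def validate_address_list(doc):
    """Return the address names, raising ValueError on an unknown type."""
    if doc.get("kind") != "AddressList":
        raise ValueError("expected kind AddressList")
    names = []
    for item in doc["items"]:
        spec = item["spec"]
        if spec["type"] not in VALID_TYPES:
            raise ValueError("unknown address type: " + spec["type"])
        names.append(spec["address"])
    return names

print(validate_address_list(addresses))
# prints ['myqueue', 'mytopic', 'myanycast', 'mymulticast']
```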

To deploy this configuration, you must currently use an HTTP client such as curl:

curl -X POST -H "content-type: application/json" --data-binary @addresses.json -k https://$(kubectl get ingress -o jsonpath='{.spec.host}' restapi)/apis/enmasse.io/v1alpha1/namespaces/[:namespace]/addressspaces/default/addresses

This will connect to the address controller REST API and deploy the address config in the 'default' address space.
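The REST path used by the curl command follows the pattern /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses. A small helper (hypothetical, for illustration only) makes the pattern explicit:

```python
def address_endpoint(host, namespace, address_space):
    """Build the addresses REST API URL used by the curl command above."""
    return ("https://{host}/apis/enmasse.io/v1alpha1/namespaces/{ns}"
            "/addressspaces/{space}/addresses").format(
                host=host, ns=namespace, space=address_space)

# Example with an illustrative REST API host
print(address_endpoint("restapi.example.com", "enmasse", "default"))
```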

Sending and receiving messages
Connecting with AMQP

For sending and receiving messages, have a look at an example Python sender and receiver.

To send and receive messages, you should connect to the exposed route. To start a receiver, run:

./simple_recv.py -a "amqps://$(kubectl get ingress -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

This will block until it has received 10 messages. To start the sender, run:

./simple_send.py -a "amqps://$(kubectl get ingress -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

The server certificate is not verified in the above examples. To fetch the certificate, run:

mkdir -p certs
kubectl get secret external-certs-messaging -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt

You can modify the client code to use this cert to verify the server connection.

Have a look at Connecting to EnMasse for more client examples.

Connecting using MQTT

For sending and receiving messages over MQTT, you can use the paho-mqtt client library. To connect, fetch the server certificate:

mkdir -p certs
kubectl get secret external-certs-mqtt  -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt
Subscriber client

Save the following to tls_mqtt_recv.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe(opts.topic, int(opts.qos))

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Receive messages from the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("recv")
client.on_connect = on_connect
client.on_message = on_message
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

In order to subscribe to a topic (for example, mytopic from the previous address configuration), the subscriber client can be used in the following way:

./tls_mqtt_recv.py -c "$(kubectl get ingress -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt
Publisher client

Save the following to tls_mqtt_send.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Publishing in on_connect() means the message is sent only once the
    # connection has been established.
    client.publish(opts.topic, opts.message, int(opts.qos))

def on_publish(client, userdata, mid):
    print("mid: " + str(mid))
    client.disconnect()

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Sends messages to the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

parser.add_option("-m", "--message", default="Hello",
                  help="message to publish (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("send")
client.on_connect = on_connect
client.on_publish = on_publish
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

To start the publisher, the client can be used in the following way:

./tls_mqtt_send.py -c "$(kubectl get ingress -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt -m "Hello EnMasse"

The publisher publishes the message and disconnects from EnMasse. The message is received by the previously connected subscriber.

Conclusion

We have seen how to set up a messaging service on Kubernetes, and how to communicate with it using example Python AMQP clients.

A.3. Setting up EnMasse on AWS

This guide walks you through setting up EnMasse on an AWS EC2 instance. Little of it is specific to AWS, so you can probably adapt the configuration to Microsoft Azure or Google GCE.

The end result of this guide is an instance of EnMasse suitable for development and experimentation; it should not be considered a production-ready setup. For instance, no persistence is configured, so neither messages in brokers nor state in other components such as Hawkular is persisted.

Prerequisites

First, you must have created an EC2 instance. EnMasse runs on both OpenShift and Kubernetes, but this post uses OpenShift purely for convenience. For the required hardware configuration, see the OpenShift prerequisites. The installation is done using Ansible, so make sure Ansible is installed on your laptop or workstation.

Configure Ansible to handle passwordless sudo

For EC2 instances, passwordless sudo is the default, and Ansible (2.3.0.0 at the time of writing) requires a minor configuration change to deal with that. On the host you will be running Ansible from, edit /etc/ansible/ansible.cfg and make sure the sudo_flags parameter is set to -H -S (remove the -n).
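The relevant fragment of /etc/ansible/ansible.cfg would then look as follows (a sketch; in stock Ansible configs of that era, sudo_flags lives in the [defaults] section):

```ini
# /etc/ansible/ansible.cfg (relevant fragment)
[defaults]
# Default is "-H -S -n"; drop -n so sudo works without a password prompt
sudo_flags = -H -S
```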

Setting up OpenShift

Once Ansible is set up, installing OpenShift is easy. First, an inventory file with the configuration and the hosts must be created. Save the following configuration to a file, for example ansible-inventory.txt:

[OSEv3:children]
masters
nodes

[OSEv3:vars]
deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_default_subdomain=<yourdomain>
openshift_public_hostname=openshift.<yourdomain>
openshift_hostname=<ec2 instance hostname>
openshift_metrics_hawkular_hostname=hawkular-metrics.<yourdomain>

openshift_install_examples=false
openshift_hosted_metrics_deploy=true

[masters]
<ec2 host> openshift_schedulable=true openshift_node_labels="{'region': 'infra'}"

[nodes]
<ec2 host> openshift_schedulable=true openshift_node_labels="{'region': 'infra'}"

This will configure OpenShift so that it can only be accessed by users defined in /etc/origin/master/htpasswd.

If you don’t have a domain with wildcard support, you can replace the domain with .nip.io, and you will have a working setup without a specialized domain.

You can now download the ansible playbooks. The simplest way to do this is to just clone the git repository:

git clone https://github.com/openshift/openshift-ansible.git

To install OpenShift, run the playbook like this:

ansible-playbook -u ec2-user -b --private-key=<keyfile>.pem -i ansible-inventory.txt openshift-ansible/playbooks/byo/openshift-cluster/config.yml

This command will take a while to finish.

Creating a user

To be able to deploy EnMasse in OpenShift, a user must be created. Log on to your EC2 instance, and create the user:

htpasswd -c /etc/origin/master/htpasswd <myuser>

Where <myuser> is the username you want to use. The command will prompt you for a password that you will later use when deploying EnMasse.

Installing EnMasse


Deploying EnMasse
Using script

Invoke the deployment script to deploy EnMasse:

./deploy.sh -m "https://openshift.yourdomain:8443" -n enmasse -u myuser -t openshift
Using ansible
ansible-playbook -i ansible/inventory/singletenant-standard.example ansible/playbooks/openshift/deploy_all.yml

This will create the deployments required for running EnMasse. Starting EnMasse takes a while, mostly depending on how fast the Docker images for the various components can be downloaded. If you followed the above guide, you should have EnMasse deployed with the following endpoints:

* AMQP: `messaging-enmasse.<yourdomain>`
* MQTT: `mqtt-enmasse.<yourdomain>`
* Console: `console-enmasse.<yourdomain>`

The console can be used for creating and deleting addresses.

Sending and receiving messages
Connecting with AMQP

For sending and receiving messages, have a look at an example Python sender and receiver.

To send and receive messages, you should connect to the exposed route. To start a receiver, run:

./simple_recv.py -a "amqps://$(oc get route -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

This will block until it has received 10 messages. To start the sender, run:

./simple_send.py -a "amqps://$(oc get route -o jsonpath='{.spec.host}' messaging):443/myanycast" -m 10

The server certificate is not verified in the above examples. To fetch the certificate, run:

mkdir -p certs
oc get secret external-certs-messaging -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt

You can modify the client code to use this cert to verify the server connection.

Have a look at Connecting to EnMasse for more client examples.

Connecting using MQTT

For sending and receiving messages over MQTT, you can use the paho-mqtt client library. To connect, fetch the server certificate:

mkdir -p certs
oc get secret external-certs-mqtt  -o jsonpath='{.data.tls\.crt}' | base64 -d > certs/tls.crt
Subscriber client

Save the following to tls_mqtt_recv.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe(opts.topic, int(opts.qos))

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Receive messages from the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("recv")
client.on_connect = on_connect
client.on_message = on_message
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

In order to subscribe to a topic (for example, mytopic from the previous address configuration), the subscriber client can be used in the following way:

./tls_mqtt_recv.py -c "$(oc get route -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt
Publisher client

Save the following to tls_mqtt_send.py or download:

#!/usr/bin/env python

import paho.mqtt.client as mqtt
import ssl
import optparse

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))

    # Publishing in on_connect() means the message is sent only once the
    # connection has been established.
    client.publish(opts.topic, opts.message, int(opts.qos))

def on_publish(client, userdata, mid):
    print("mid: " + str(mid))
    client.disconnect()

def on_log(client, userdata, level, string):
    print(string)

parser = optparse.OptionParser(usage="usage: %prog [options]",
                               description="Sends messages to the supplied address.")

parser.add_option("-c", "--connectHost", default="localhost",
                  help="host to connect to (default %default)")

parser.add_option("-p", "--portHost", default="8883",
                  help="port to connect to (default %default)")

parser.add_option("-t", "--topic", default="mytopic",
                  help="topic to subscribe to (default %default)")

parser.add_option("-q", "--qos", default="0",
                  help="quality of service (default %default)")

parser.add_option("-s", "--serverCert", default=None,
                  help="server certificate file path (default %default)")

parser.add_option("-m", "--message", default="Hello",
                  help="message to publish (default %default)")

opts, args = parser.parse_args()

client = mqtt.Client("send")
client.on_connect = on_connect
client.on_publish = on_publish
client.on_log = on_log

context = ssl.create_default_context()
if opts.serverCert == None:
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
else:
    context.load_verify_locations(cafile=opts.serverCert)

# just useful to activate for decrypting local TLS traffic with Wireshark
#context.set_ciphers("RSA")

client.tls_set_context(context)
client.tls_insecure_set(True)
client.connect(opts.connectHost, int(opts.portHost), 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()

To start the publisher, the client can be used in the following way:

./tls_mqtt_send.py -c "$(oc get route -o jsonpath='{.spec.host}' mqtt)" -p 443 -t mytopic -q 1 -s ./certs/tls.crt -m "Hello EnMasse"

The publisher publishes the message and disconnects from EnMasse. The message is received by the previously connected subscriber.

(Optional) Setting up metrics

The process for setting up Grafana is a bit more involved, but it gives you a nice overview of what’s going on over time. First of all, I like to set up everything metrics-related in the openshift-infra project. To do that, you must first grant your user sufficient privileges. Since this is not a production setup, I grant cluster-admin privileges for simplicity (this requires logging in to the EC2 instance):

oc adm --config /etc/origin/master/admin.kubeconfig policy add-cluster-role-to-user cluster-admin developer

With this in place, you can set up the hawkular-openshift-agent, which pulls metrics from routers and brokers:

oc create -f https://raw.githubusercontent.com/openshift/origin-metrics/master/hawkular-agent/hawkular-openshift-agent-configmap.yaml -n openshift-infra
oc process -f https://raw.githubusercontent.com/openshift/origin-metrics/master/hawkular-agent/hawkular-openshift-agent.yaml IMAGE_VERSION=1.4.0.Final | oc create -n openshift-infra -f -
oc adm policy add-cluster-role-to-user hawkular-openshift-agent system:serviceaccount:openshift-infra:hawkular-openshift-agent

If everything is setup correctly, you can then deploy Grafana:

oc process -f https://raw.githubusercontent.com/hawkular/hawkular-grafana-datasource/master/openshift/openshift-template-ephemeral.yaml -n openshift-infra | oc create -n openshift-infra -f -

After some time, Grafana should become available at the host reported by oc get route -n openshift-infra -o jsonpath='{.spec.host}' hawkular-grafana. The default username and password is admin/admin.

Summary

In this post, you’ve seen how to:

  • Deploy OpenShift on an AWS EC2 instance

  • Deploy EnMasse cloud messaging

  • Deploy Grafana for monitoring

Appendix B: REST API Reference

B.1. EnMasse

B.1.1. Overview

This is the EnMasse API specification.

Version information

Version : 1.0.0

URI scheme

Schemes : HTTPS

Tags
  • addresses : Operate on Addresses

  • addressspaces : Operate on AddressSpaces

External Docs

Description : Find out more about EnMasse
URL : http://enmasse.io

B.1.2. Paths

POST /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addresses
Description

create an Address

Parameters

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required)

Responses

  • 200 OK

  • 201 Created

  • 401 Unauthorized (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addresses

  • enmasse_v1alpha1
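As a sketch, a request body for this operation might look like the following, matching the io.enmasse.v1alpha1.Address definition in the Definitions section (the name, address, and plan values are illustrative):

```json
{
  "apiVersion": "enmasse.io/v1alpha1",
  "kind": "Address",
  "metadata": {
    "name": "myqueue"
  },
  "spec": {
    "address": "myqueue",
    "type": "queue",
    "plan": "sharded-queue"
  }
}
```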

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addresses
Description

list objects of kind Address

Parameters

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • labelSelector (query, optional, string): A selector to restrict the list of returned objects by their labels. Defaults to everything.

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

Produces
  • application/json

Tags
  • addresses

  • enmasse_v1alpha1

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addresses/{name}
Description

read the specified Address

Parameters

  • name (path, required, string): Name of Address to read

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addresses

  • enmasse_v1alpha1

PUT /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addresses/{name}
Description

replace the specified Address

Parameters

  • name (path, required, string): Name of Address to replace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required)

Responses

  • 200 OK

  • 201 Created

  • 401 Unauthorized (schema: No Content)

Produces
  • application/json

Tags
  • addresses

  • enmasse_v1alpha1

DELETE /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addresses/{name}
Description

delete an Address

Parameters

  • name (path, required, string): Name of Address to delete

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Produces
  • application/json

Tags
  • addresses

  • enmasse_v1alpha1

POST /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces
Description

create an AddressSpace

Parameters

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required)

Responses

  • 200 OK

  • 201 Created

  • 401 Unauthorized (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addressspaces

  • enmasse_v1alpha1
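A sketch of a request body for this operation, matching the io.enmasse.v1alpha1.AddressSpace definition in the Definitions section, might look like this (the name is illustrative, and the plan name is a placeholder; use a plan available in your installation):

```json
{
  "apiVersion": "enmasse.io/v1alpha1",
  "kind": "AddressSpace",
  "metadata": {
    "name": "myspace"
  },
  "spec": {
    "type": "standard",
    "plan": "<plan-name>"
  }
}
```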

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces
Description

list objects of kind AddressSpace

Parameters

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • labelSelector (query, optional, string): A selector to restrict the list of returned objects by their labels. Defaults to everything.

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

Produces
  • application/json

Tags
  • addressspaces

  • enmasse_v1alpha1

POST /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses
Description

create Addresses in an AddressSpace

Parameters

  • addressSpace (path, required, string): Name of AddressSpace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required): AddressList object

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addressspace_addresses

  • enmasse_v1alpha1

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses
Description

list objects of kind Address in AddressSpace

Parameters

  • addressSpace (path, required, string): Name of AddressSpace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Produces
  • application/json

Tags
  • addressspace_addresses

  • enmasse_v1alpha1

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses/{address}
Description

read the specified Address in AddressSpace

Parameters

  • address (path, required, string): Name of Address

  • addressSpace (path, required, string): Name of AddressSpace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Produces
  • application/json

Tags
  • addressspace_addresses

  • enmasse_v1alpha1

PUT /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses/{address}
Description

replace Address in an AddressSpace

Parameters

  • address (path, required, string): Name of Address

  • addressSpace (path, required, string): Name of AddressSpace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required): Address object

Responses

  • 200 OK

  • 201 Created

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addressspace_addresses

  • enmasse_v1alpha1

DELETE /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{addressSpace}/addresses/{address}
Description

delete an Address in AddressSpace

Parameters

  • address (path, required, string): Name of Address

  • addressSpace (path, required, string): Name of AddressSpace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Produces
  • application/json

Tags
  • addressspace_addresses

  • enmasse_v1alpha1

GET /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{name}
Description

read the specified AddressSpace

Parameters

  • name (path, required, string): Name of AddressSpace to read

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Consumes
  • application/json

Produces
  • application/json

Tags
  • addressspaces

  • enmasse_v1alpha1

PUT /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{name}
Description

replace the specified AddressSpace

Parameters

  • name (path, required, string): Name of AddressSpace to replace

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

  • body (request body, required)

Responses

  • 200 OK

  • 201 Created

  • 401 Unauthorized (schema: No Content)

Produces
  • application/json

Tags
  • addressspaces

  • enmasse_v1alpha1

DELETE /apis/enmasse.io/v1alpha1/namespaces/{namespace}/addressspaces/{name}
Description

delete an AddressSpace

Parameters

  • name (path, required, string): Name of AddressSpace to delete

  • namespace (path, required, string): object name and auth scope, such as for teams and projects

Responses

  • 200 OK

  • 401 Unauthorized (schema: No Content)

  • 404 Not found (schema: No Content)

Produces
  • application/json

Tags
  • addressspaces

  • enmasse_v1alpha1

B.1.3. Definitions

ObjectMeta

ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.

  • name (required): string

  • namespace (optional): string

Status

Status is a return value for calls that don’t return other objects.

  • code (optional, integer (int32)): Suggested HTTP return code for this status, 0 if not set.

io.enmasse.v1alpha1.Address
  • apiVersion (required): enum (enmasse.io/v1alpha1)

  • kind (required): enum (Address)

  • metadata (required)

  • spec (required)

  • status (optional)

io.enmasse.v1alpha1.AddressList
  • apiVersion (required): enum (enmasse.io/v1alpha1). Default: "enmasse.io/v1alpha1"

  • items (required)

  • kind (required): enum (AddressList)

io.enmasse.v1alpha1.AddressSpace
  • apiVersion (required): enum (enmasse.io/v1alpha1)

  • kind (required): enum (AddressSpace)

  • metadata (required)

  • spec (required)

  • status (optional)

io.enmasse.v1alpha1.AddressSpaceList
  • apiVersion (required): enum (enmasse.io/v1alpha1). Default: "enmasse.io/v1alpha1"

  • items (required)

  • kind (required): enum (AddressSpaceList)

io.enmasse.v1alpha1.AddressSpaceSpec
  • authenticationService (optional)

  • endpoints (optional): < endpoints > array

  • plan (required): string

  • type (required)

authenticationService

  • details (optional): object

  • type (optional): string

endpoints

  • cert (optional)

  • host (optional): string

  • name (optional): string

  • service (optional): string

  • servicePort (optional): string

cert

  • provider (optional): string

  • secretName (optional): string

io.enmasse.v1alpha1.AddressSpaceStatus
  • endpointStatuses (optional): < endpointStatuses > array

  • isReady (optional): boolean

  • messages (optional): < string > array

endpointStatuses

  • host (optional): string

  • name (optional): string

  • port (optional): integer

  • serviceHost (optional): string

  • servicePorts (optional): < servicePorts > array

servicePorts

  • name (optional): string

  • port (optional): integer

io.enmasse.v1alpha1.AddressSpaceType

AddressSpaceType is the type of address space (standard, brokered). Each type supports different types of addresses and semantics for those types.

Type : enum (standard, brokered)

io.enmasse.v1alpha1.AddressSpec
  • address (required): string

  • plan (required): string

  • type (required)

io.enmasse.v1alpha1.AddressStatus
  • isReady (optional): boolean

  • messages (optional): < string > array

  • phase (optional): enum (Pending, Configuring, Active, Failed, Terminating)

io.enmasse.v1alpha1.AddressType

Type of address (queue, topic, …). Each address type supports different kinds of messaging semantics.

Type : enum (queue, topic, anycast, multicast)