
App Metrics Operator

Project health

Build Status (CircleCI) · Coverage Status (Coveralls) · License · Go Report Card

Getting Started

Cloning the repository

The following command clones this project into a local directory under your GOPATH:

$ git clone git@github.com:aerogear/app-metrics-operator.git $GOPATH/src/github.com/aerogear/app-metrics-operator

Minishift installation and setup

Install Minishift, then prepare it for the operator by running the following commands.

# create a new profile to test the operator
$ minishift profile set app-metrics-operator

# enable the admin-user add-on
$ minishift addon enable admin-user

# add insecure registry to download the images from docker
$ minishift config set insecure-registry 172.30.0.0/16

# start the instance
$ minishift start
ℹ️
The above steps are not required on OCP 4 and later, since OLM and Operators come installed by default.

Installation

As a user with admin permissions, you can install the app-metrics-operator and a sample CR in your OpenShift cluster as follows:

make cluster/prepare
make install
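
To verify the installation, you can list the pods in the operator's namespace (the app-metrics namespace name is the one used throughout the examples in this README):

kubectl get pods -n app-metrics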

Configuration

Image Streams

The operator uses two image streams, and which image streams to use is configurable via environment variables.

The App Metrics image stream is created by the operator in the same namespace. For Postgres, however, the image stream in the openshift namespace is used.

The following table shows the available environment variable names, along with their default values:

Environment Variables

Name | Default | Purpose
APP_METRICS_IMAGE_STREAM_NAME | app-metrics-imagestream | Name of the App Metrics image stream that will be created by the operator.
APP_METRICS_IMAGE_STREAM_TAG | latest | Tag of the App Metrics image stream that will be created by the operator.
APP_METRICS_IMAGE_STREAM_INITIAL_IMAGE | docker.io/aerogear/aerogear-app-metrics:0.0.13 | Initial image for the App Metrics image stream that will be created by the operator.
POSTGRES_IMAGE_STREAM_NAMESPACE | openshift | Namespace to look for the Postgres image stream.
POSTGRES_IMAGE_STREAM_NAME | postgresql | Name of the Postgres image stream to look for.
POSTGRES_IMAGE_STREAM_TAG | 10 | Tag of the Postgres image stream.

🔥
Re-deploying this operator with customized images will cause all instances owned by the operator to be updated.
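
One way to supply customized images is to set these environment variables on the operator's own Deployment. As a sketch, assuming the operator runs as a Deployment named app-metrics-operator in the app-metrics namespace (both names are assumptions for illustration), the initial App Metrics image could be overridden with kubectl set env:

# assumption: the Deployment name and namespace are illustrative only
kubectl set env deployment/app-metrics-operator -n app-metrics \
  APP_METRICS_IMAGE_STREAM_INITIAL_IMAGE=docker.io/aerogear/aerogear-app-metrics:0.0.13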

Container Names

If you would like to modify the container names, you can use the following environment variables.

Environment Variables
Name | Default
APP_METRICS_CONTAINER_NAME | appmetrics
POSTGRES_CONTAINER_NAME | postgresql

Backups

The BACKUP_IMAGE environment variable configures which image to use for backing up the custom resources created by this operator. The default value is quay.io/integreatly/backup-container:1.0.8.

Monitoring Service (Metrics)

The application-monitoring stack provisioned by the application-monitoring-operator on Integr8ly can be used to gather metrics from this operator and the AppMetrics Server. These metrics can be used by Integr8ly’s application monitoring to generate Prometheus metrics, AlertManager alerts and a Grafana dashboard.

It is required that the integr8ly/Grafana and Prometheus operators are installed. For further detail see integr8ly/application-monitoring-operator.

The following command enables the monitoring service in the operator namespace:

make monitoring/install
The namespace is set manually in the ServiceMonitor, Prometheus Rules, Operator Service, and Grafana Dashboard files. You should replace it in these files if the operator is not installed in the default namespace. The following is an example from the Prometheus Rules:
  expr: |
          (1-absent(kube_pod_status_ready{condition="true", namespace="app-metrics"})) or sum(kube_pod_status_ready{condition="true", namespace="app-metrics"}) < 3
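
If the operator is not installed in the default app-metrics namespace, one possible way to update all of these files at once is a search-and-replace, for example with sed (the ./deploy/monitor/ path is an assumption for illustration):

# assumption: the monitoring manifests live under ./deploy/monitor/
sed -i 's/app-metrics/<your-namespace>/g' ./deploy/monitor/*.yaml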

ℹ️
The command make monitoring/uninstall will uninstall the Monitoring Service.

Custom Resources (aka How to get value from this operator)

AppMetricsService

This is the main installation resource kind. Creation of a valid AppMetricsService CR will result in a functional App Metrics Service deployed to your namespace.

AppMetricsService has no fields that are configurable.

An example AppMetricsService resource is available at ./deploy/crds/metrics_v1alpha1_appmetricsservice_cr.yaml:

metrics_v1alpha1_appmetricsservice_cr.yaml
apiVersion: metrics.aerogear.org/v1alpha1
kind: AppMetricsService
metadata:
  name: example-appmetricsservice

To create this, you can run:

kubectl apply -n app-metrics -f ./deploy/crds/metrics_v1alpha1_appmetricsservice_cr.yaml

To see the created instance, you can run:

kubectl get appmetricsservice example-appmetricsservice -n app-metrics -o yaml

AppMetricsConfig

This is the service consumption resource kind. Creation of a valid AppMetricsConfig CR will write the client config to a config map in the CR namespace.

AppMetricsConfig has no fields that are configurable.

An example AppMetricsConfig resource is available at ./deploy/crds/metrics_v1alpha1_appmetricsconfig_cr.yaml:

metrics_v1alpha1_appmetricsconfig_cr.yaml
apiVersion: metrics.aerogear.org/v1alpha1
kind: AppMetricsConfig
metadata:
  name: example-app

To create this, you can run:

kubectl apply -n app-metrics -f ./deploy/crds/metrics_v1alpha1_appmetricsconfig_cr.yaml

To see the created instance, you can run:

kubectl get appmetricsconfig example-app -n app-metrics -o yaml

The config map created will have the name pattern <cr-app-name>-metrics. For the example above, you can run the following command to get the config map.

kubectl get configmap example-app-metrics -n app-metrics -o yaml

It will have content similar to this:

apiVersion: v1
data:
  SDKConfig: >-
    {"url":
    "https://example-appmetricsservice-appmetrics-app-metrics.openshift.cluster.hostname"}
kind: ConfigMap
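
To read just the SDK configuration out of the config map, a jsonpath query can be used, for example:

kubectl get configmap example-app-metrics -n app-metrics -o jsonpath='{.data.SDKConfig}'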

Development

Prerequisites

  • Access to an OpenShift cluster with admin privileges to be able to create Roles. Minishift is suggested.

  • Go, Make, dep, operator-sdk, kubectl (kubectl can just be a symlink to oc)

Running the operator

  1. Prepare the operator project:

     make cluster/prepare

  2. Run the operator (locally, not in OpenShift):

     make code/run

  3. Create an App Metrics Service instance (in another terminal):

     make install

  4. Watch the status of your App Metrics Service instance provisioning (optional):

     watch -n1 "kubectl get po -n app-metrics && echo '' && kubectl get appmetricsservice -o yaml -n app-metrics"
  5. If you want to work with resources that require the local instance of your operator to talk to the App Metrics instance in the cluster, you'll need to make the corresponding domain name resolvable locally. Something like the following should work: add an entry to /etc/hosts for the example Service that's created, then forward the port from the relevant Pod in the cluster to the local machine. Run this in a separate terminal, and ctrl+c to clean it up when finished:
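
     A rough sketch of such commands, assuming the example Service is named example-appmetricsservice-appmetrics and listens on port 4000 (the Service name, DNS suffix, and port are all assumptions for illustration; check the real values with kubectl get svc -n app-metrics):

     # assumption: Service name and cluster DNS suffix are illustrative only
     echo '127.0.0.1 example-appmetricsservice-appmetrics.app-metrics.svc' | sudo tee -a /etc/hosts

     # assumption: 4000 is a placeholder for the Service's actual port
     kubectl port-forward -n app-metrics svc/example-appmetricsservice-appmetrics 4000:4000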

  6. Create an App Metrics Config instance:

     make example-app/apply

  7. Watch the status of your App Metrics Config (optional):

     watch -n1 "kubectl get po -n app-metrics && echo '' && kubectl get appmetricsconfig -o yaml -n app-metrics"

  8. Check the config map created:

     kubectl get configmap -n app-metrics example-app-metrics -o yaml

  9. When finished, clean up:

     make cluster/clean

Publishing images

Images are automatically built and pushed to our image repository by Jenkins in the following cases:

  • For every change merged to master, a new image with the master tag is published.

  • For every merged change that has a git tag, new images with the <operator-version> and latest tags are published.

Tagging a Release

Follow these steps:

  1. Choose a new version following semver, for example 0.1.0.

  2. Bump the version in the version.go file.

  3. Update the CHANGELOG.md with the new release.

  4. Update any tag references in all SOP files (e.g. https://github.com/aerogear/app-metrics-operator/blob/0.1.0/SOP/SOP-operator.adoc).
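
    For example, existing tag references could be located with grep before updating them (a sketch; adjust the version as needed):

    $ grep -rn "0.1.0" SOP/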

  5. Create a git tag with the version value, for example:

    $ git tag -a 0.1.0 -m "version 0.1.0"
  6. Push the new tag to the upstream repository; this will trigger an automated release by Jenkins, for example:

    $ git push upstream 0.1.0
    ℹ️
    The image with the tag will be created and pushed to the app-metrics-operator image hosting repository by Jenkins.

Architecture

This operator is cluster-scoped. For further information see the Operator Scope section in the Operator Framework documentation. Also, check its roles in the deploy directory.

ℹ️
The operator, application, and database will be installed in a namespace that is created by this project.

CI/CD

CircleCI

  • Coveralls

  • Unit Tests

ℹ️
See the config.yml.

Jenkins

  • Integration Tests

  • Build of images

ℹ️
See the Jenkinsfile.

Makefile command reference

Application

Command | Description
make install | Creates the {namespace} namespace, application CRDs, cluster role and service account.
make cluster/clean | Deletes everything created by make cluster/prepare.
make example-app/apply | Creates an example App Metrics Config instance.
make cluster/prepare | Applies everything except the operator.yaml.
make monitoring/install | Installs the Monitoring Service in order to provide metrics.
make monitoring/uninstall | Uninstalls the Monitoring Service, i.e. all configuration applied by make monitoring/install.

Local Development

Command | Description
make code/run | Runs the operator locally for development purposes.
make code/gen | Sets up the environment for debugging purposes.
make code/vet | Examines source code and reports suspicious constructs using go vet.
make code/fix | Formats code using gofmt.

Jenkins

Command | Description
make test/compile | Compiles the image to be used in the e2e tests.
make code/compile | Compiles the image to be used by Jenkins.

Tests / CI

Command | Description
make test/integration-cover | Runs the integration test coverage (used by Coveralls).
make test/unit | Runs unit tests.
make code/build/linux | Builds the image with the parameters required for CircleCI.

ℹ️
The Makefile is implemented as a set of tasks; use these tasks when working with the project.

Supportability

This operator was developed using the Kubernetes and OpenShift APIs.

Currently this project requires the use of v1.Route to expose the service and an OAuth proxy for authentication, which makes it unsupported on plain Kubernetes. It also uses ImageStream, which is specific to the OpenShift API. For these reasons, this project is not currently compatible with vanilla Kubernetes; however, we aim to make it work on vanilla Kubernetes in the future.

Security Response

If you’ve found a security issue that you’d like to disclose confidentially please contact the Red Hat Product Security team.

The App Metrics Operator is licensed under the Apache License, Version 2.0, and is subject to the AeroGear Export Policy.

Contributing

All contributions are hugely appreciated. Please see our Contributing Guide for guidelines on how to open issues and pull requests. Please check out our Code of Conduct too.

Questions

There are a number of ways you can get in touch with us; please see the AeroGear community.
