Commits (47)
@@ -35,3 +35,4 @@ prime/*
stage/*
snap/.snapcraft/*
go.sum
squashfs-root/
@@ -12,20 +12,21 @@ To build a device service using the EdgeX C SDK you'll need the following:
* libmicrohttpd
* libcurl
* libyaml
* libcbor
You can install these on Ubuntu by running::
sudo apt install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev
sudo apt install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev
===============================
Get the EdgeX Device SDK for C
===============================
The next step is to download and build the EdgeX Device SDK for C. You always want to use the release of the SDK that matches the release of EdgeX you are targeting. As of this writing the `delhi` release is the current stable release of EdgeX, so we will be using the `delhi` branch of the C SDK.
The next step is to download and build the EdgeX Device SDK for C. You always want to use the release of the SDK that matches the release of EdgeX you are targeting. As of this writing the `edinburgh` release is the current stable release of EdgeX, so we will be using the `edinburgh` branch of the C SDK.
#. First, clone the delhi branch of device-sdk-c from Github::
#. First, clone the edinburgh branch of device-sdk-c from Github::
git clone -b delhi https://github.com/edgexfoundry/device-sdk-c.git
git clone -b edinburgh https://github.com/edgexfoundry/device-sdk-c.git
cd ./device-sdk-c
#. Then, build the device-sdk-c::
@@ -40,15 +41,12 @@ Starting a new Device Service project
For this guide we're going to use the example template provided by the C SDK as a starting point, and will modify it to generate random integer values.
#. Begin by copying the contents of src/c/examples/ into a new directory named `example-device-c`::
#. Begin by copying the template example source into a new directory named `example-device-c`::
cp -r ./src/c/examples ../example-device-c
mkdir -p ../example-device-c/res
cp ./src/c/examples/template.c ../example-device-c
cd ../example-device-c
#. You can delete the CMakeLists.txt as we don't need it anymore::
rm CMakeLists.txt
=========================
Build your Device Service
@@ -58,25 +56,28 @@ Now you are ready to build your new device service using the C SDK you compiled
#. Tell the compiler where to find the C SDK files::
export CSDK_DIR=../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-0.7.2
export CSDK_DIR=../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-1.0.0
.. note:: The exact path to your compiled CSDK_DIR may differ, depending on the tagged version number of the SDK.
#. Now you can build your device service executable::
make
gcc -I$CSDK_DIR/include -L$CSDK_DIR/lib -o device-example-c template.c -lcsdk
=============================
Customize your Device Service
=============================
Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your `template.c` method **template_get_handler** so that the **while** block looks like this:
Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your `template.c` method **template_get_handler** so that it reads as follows:
.. code-block:: c
:linenos:
:lineno-start: 92
:emphasize-lines: 3,8,11,14
:lineno-start: 97
:emphasize-lines: 7,12,15,18
for (uint32_t i = 0; i < nreadings; i++)
{
const edgex_nvpairs * current = requests[i].attributes;
while (current!=NULL)
{
if (strcmp (current->name, "type") ==0 )
@@ -92,6 +93,8 @@ Up to now you've been building the example device service provided by the C SDK.
}
current = current->next;
}
}
return true;
============================
@@ -100,9 +103,9 @@ Creating your Device Profile
A Device Profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all provided in a Device Profile. Device Services use the Device Profile to understand what data is being collected from the Device (in some cases providing information used by the Device Service to know how to communicate with the device and get the desired sensor readings). A Device Profile is needed to describe the data that will be collected from the simple random number generating Device Service.
#. Explore the files in the src/c/examples/res folder. Take note of the example Device Profile YAML file that is already there (ExampleProfile.yml). You can explore the contents of this file to see how devices are represented by YAML. In particular, note how fields or properties of a sensor are represented by “deviceResources”. Commands to be issued to the device are represented by “commands”.
#. Explore the files in the src/c/examples/res folder. Take note of the example Device Profile YAML file that is already there (TemplateProfile.yaml). You can explore the contents of this file to see how devices are represented by YAML. In particular, note how fields or properties of a sensor are represented by “deviceResources”. Commands to be issued to the device are represented by “coreCommands”.
#. Download this :download:`random-generator-device.yaml <random-generator-device.yaml>` into the ./res folder.
You can open random-generator-device.yaml in a text editor. In this Device Profile, you are telling EdgeX that the device being described has a single property (or deviceResource) it should know about – in this case, the “randomnumber” property. Note how the deviceResource is typed.
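As a point of reference, a deviceResources entry of the kind described might look like the fragment below (it mirrors the profile fragment shown later in this diff; the downloaded file remains the authoritative version):

```yaml
deviceResources:
  -
    name: "randomnumber"
    description: "generated random number"
    attributes:
      { type: "random" }
    properties:
      value:
        { type: "Int32", readWrite: "R", minimum: "0.00", maximum: "100.00" }
      units:
        { type: "String", readWrite: "R", defaultValue: "" }
```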
@@ -116,7 +119,7 @@ Configuring your Device Service
You will now update the configuration for your new Device Service – changing the port it operates on (so as not to conflict with other Device Services), altering the schedule so that data is collected from the Device Service every 10 seconds, and setting up the initial provisioning of the random number generating device when the service starts.
* Downlod this :download:`configuration.toml <configuration.toml>` to the ./res folder (this will overwrite an existing file – that’s ok).
* Download this :download:`configuration.toml <configuration.toml>` to the ./res folder.
If you will be running EdgeX inside of Docker containers (which you will at the bottom of this guide) you need to tell your new Device Service to listen on the Docker host IP address (172.17.0.1) instead of **localhost**. To do that, modify the configuration.toml file so that the top section looks like this:
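The code block referenced here is elided in the diff above. Based on the Docker host address just mentioned and the port used by the curl example later in this guide (49992), the top of configuration.toml would look roughly like this — a sketch to verify against the downloaded file, not the authoritative content:

```toml
# hypothetical sketch -- confirm against the downloaded configuration.toml
[Service]
Host = "172.17.0.1"
Port = 49992
```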
@@ -137,7 +140,7 @@ Now you have your new Device Service, modified to return a random number, a Devi
#. Rebuild your Device Service to reflect the changes that you have made::
make
gcc -I$CSDK_DIR/include -L$CSDK_DIR/lib -o device-example-c template.c -lcsdk
=======================
@@ -162,7 +165,12 @@ Allow your newly created Device Service, which was formed out of the Device Serv
docker logs -f edgex-core-data
Which would print an Event record every time your Device Service is called. Note that the value of the "randomnumber" reading is an integer between 0 and 100::
Which would print an Event record every time your Device Service is called.
#. You can manually generate an event using curl to query the device service directly::
curl 0:49992/api/v1/device/name/RandNum-Device01/Random
Note that the value of the "randomnumber" reading is an integer between 0 and 100::
INFO: 2019/02/05 20:27:05 Posting Event: {"id":"","pushed":0,"device":"RandNum-Device01","created":0,"modified":0,"origin":1549398425000,"schedule":null,"event":null,"readings":[{"id":"","pushed":0,"created":0,"origin":0,"modified":0,"device":null,"name":"randomnumber","value":"63"}]}
INFO: 2019/02/05 20:27:05 Putting event on message queue
{"device":"RandNum-Device01","origin":1559317102457,"readings":[{"name":"randomnumber","value":"63"}]}
@@ -52,25 +52,17 @@ EnableRemote = false
File = "./device-simple.log"
Level = "DEBUG"
# Pre-define Schedule Configuration
[[Schedules]]
Name = "10sec-schedule"
Frequency = "PT10S"
[[ScheduleEvents]]
Name = "readRandom"
Schedule = "10sec-schedule"
[ScheduleEvents.Addressable]
HTTPMethod = "GET"
Path = "/api/v1/device/name/RandNum-Device01/Random"
# Pre-define Devices
[[DeviceList]]
Name = "RandNum-Device01"
Profile = "RandNum-Device"
Description = "Random Number Generator Device"
Labels = [ "random", "test" ]
[DeviceList.Addressable]
Address = "random"
Port = 300
Protocol = "OTHER"
\ No newline at end of file
[DeviceList.Protocols]
[DeviceList.Protocols.Other]
Address = "random"
Port = 300
[[DeviceList.AutoEvents]]
Resource = "Random"
OnChange = false
Frequency = "10s"
@@ -14,18 +14,18 @@ deviceResources:
{ type: "random" }
properties:
value:
{ type: "INT32", readWrite: "R", defaultValue: "0.00", minimum: "0.00", maximum: "100.00" }
{ type: "Int32", readWrite: "R", minimum: "0.00", maximum: "100.00" }
units:
{ type: "String", readWrite: "R", defaultValue: "" }
resources:
deviceCommands:
-
name: "Random"
get:
-
{ operation: "get", object: "randomnumber", property: "value", parameter: "Random" }
commands:
coreCommands:
-
name: "Random"
get:
@@ -38,4 +38,4 @@ commands:
-
code: "503"
description: "service unavailable"
expectedValues: []
\ No newline at end of file
expectedValues: []
@@ -362,17 +362,19 @@ func deleteDeviceProfile(dp models.DeviceProfile, w http.ResponseWriter) error {
http.Error(w, err.Error(), http.StatusConflict)
return err
}
// Delete the profile
if err := dbClient.DeleteDeviceProfileById(dp.Id); err != nil {
http.Error(w, err.Error(), http.StatusServiceUnavailable)
return err
}
for _, command := range dp.CoreCommands {
if err := dbClient.DeleteCommandById(command.Id); err != nil {
http.Error(w, err.Error(), http.StatusServiceUnavailable)
return err
}
}
// Delete the profile
if err := dbClient.DeleteDeviceProfileById(dp.Id); err != nil {
http.Error(w, err.Error(), http.StatusServiceUnavailable)
return err
}
return nil
}
......
@@ -18,7 +18,6 @@ import (
"strconv"
"github.com/edgexfoundry/edgex-go/internal/pkg/db"
dataBase "github.com/edgexfoundry/edgex-go/internal/pkg/db"
contract "github.com/edgexfoundry/go-mod-core-contracts/models"
"github.com/gomodule/redigo/redis"
"github.com/google/uuid"
@@ -256,15 +255,7 @@ func (c *Client) GetAllDevices() ([]contract.Device, error) {
}
func (c *Client) GetDevicesByProfileId(id string) ([]contract.Device, error) {
d, err := c.getDevicesByValue(db.Device + ":profile:" + id)
// XXX This is here only because test/db_metadata.go is inconsistent when testing for _not found_. It
// should always be checking for database.ErrNotFound but too often it is checking for nil
if len(d) == 0 {
err = dataBase.ErrNotFound
}
return d, err
return c.getDevicesByValue(db.Device + ":profile:" + id)
}
func (c *Client) GetDeviceById(id string) (contract.Device, error) {
@@ -286,15 +277,7 @@ func (c *Client) GetDeviceByName(n string) (contract.Device, error) {
}
func (c *Client) GetDevicesByServiceId(id string) ([]contract.Device, error) {
d, err := c.getDevicesByValue(db.Device + ":service:" + id)
// XXX This is here only because test/db_metadata.go is inconsistent when testing for _not found_. It
// should always be checking for database.ErrNotFound but too often it is checking for nil
if len(d) == 0 {
err = dataBase.ErrNotFound
}
return d, err
return c.getDevicesByValue(db.Device + ":service:" + id)
}
func (c *Client) GetDevicesWithLabel(l string) ([]contract.Device, error) {
@@ -439,15 +422,7 @@ func (c *Client) GetDeviceProfileByName(n string) (contract.DeviceProfile, error
}
func (c *Client) GetDeviceProfilesByCommandId(id string) ([]contract.DeviceProfile, error) {
dp, err := c.getDeviceProfilesByValues(db.DeviceProfile + ":command:" + id)
// XXX This is here only because test/db_metadata.go is inconsistent when testing for _not found_. It
// should always be checking for database.ErrNotFound but too often it is checking for nil
if len(dp) == 0 {
err = dataBase.ErrNotFound
}
return dp, err
return c.getDeviceProfilesByValues(db.DeviceProfile + ":command:" + id)
}
// Get device profiles with the passed query
@@ -557,7 +532,7 @@ func deleteDeviceProfile(conn redis.Conn, id string) error {
}
dp := contract.DeviceProfile{}
_ = unmarshalObject(object, &dp)
_ = unmarshalDeviceProfile(object, &dp)
_ = conn.Send("MULTI")
_ = conn.Send("DEL", id)
@@ -798,13 +773,6 @@ func (c *Client) GetDeviceServicesByAddressableId(id string) ([]contract.DeviceS
return []contract.DeviceService{}, err
}
// XXX This should really return an ErrNotFound. It's not to be consistent with existing code
// assumptions
//
// if len(objects) == 0 {
// return []contract.DeviceService{}, dataBase.ErrNotFound
// }
d := make([]contract.DeviceService, len(objects))
for i, object := range objects {
err = unmarshalDeviceService(object, &d[i])
@@ -988,27 +956,11 @@ func (c *Client) GetProvisionWatchersByIdentifier(k string, v string) (pw []cont
}
func (c *Client) GetProvisionWatchersByServiceId(id string) ([]contract.ProvisionWatcher, error) {
pw, err := c.getProvisionWatchersByValue(db.ProvisionWatcher + ":service:" + id)
// XXX This is here only because test/db_metadata.go is inconsistent when testing for _not found_. It
// should always be checking for database.ErrNotFound but too often it is checking for nil
if len(pw) == 0 {
err = dataBase.ErrNotFound
}
return pw, err
return c.getProvisionWatchersByValue(db.ProvisionWatcher + ":service:" + id)
}
func (c *Client) GetProvisionWatchersByProfileId(id string) ([]contract.ProvisionWatcher, error) {
pw, err := c.getProvisionWatchersByValue(db.ProvisionWatcher + ":profile:" + id)
// XXX This is here only because test/db_metadata.go is inconsistent when testing for _not found_. It
// should always be checking for database.ErrNotFound but too often it is checking for nil
if len(pw) == 0 {
err = dataBase.ErrNotFound
}
return pw, err
return c.getProvisionWatchersByValue(db.ProvisionWatcher + ":profile:" + id)
}
func (c *Client) GetProvisionWatcherById(id string) (contract.ProvisionWatcher, error) {
......
@@ -877,7 +877,7 @@ func testDBDeviceProfile(t *testing.T, db interfaces.DBClient) {
}
deviceProfiles, err = db.GetDeviceProfilesByCommandId(uuid.New().String())
if err != dataBase.ErrNotFound {
if (err != nil && err != dataBase.ErrNotFound) || len(deviceProfiles) != 0 {
t.Fatalf("Error getting deviceProfiles %v", err)
}
if len(deviceProfiles) != 0 {
@@ -972,7 +972,7 @@ func testDBDevice(t *testing.T, db interfaces.DBClient) {
}
devices, err = db.GetDevicesByProfileId(uuid.New().String())
if err != dataBase.ErrNotFound {
if (err != nil && err != dataBase.ErrNotFound) || len(devices) != 0 {
t.Fatalf("Error getting devices %v", err)
}
if len(devices) != 0 {
@@ -988,7 +988,7 @@ func testDBDevice(t *testing.T, db interfaces.DBClient) {
}
devices, err = db.GetDevicesByServiceId(uuid.New().String())
if err != dataBase.ErrNotFound {
if (err != nil && err != dataBase.ErrNotFound) || len(devices) != 0 {
t.Fatalf("Error getting devices %v", err)
}
if len(devices) != 0 {
@@ -1095,7 +1095,7 @@ func testDBProvisionWatcher(t *testing.T, db interfaces.DBClient) {
}
provisionWatchers, err = db.GetProvisionWatchersByServiceId(uuid.New().String())
if err != dataBase.ErrNotFound {
if (err != nil && err != dataBase.ErrNotFound) || len(provisionWatchers) != 0 {
t.Fatalf("Error getting provisionWatchers %v", err)
}
if len(provisionWatchers) != 0 {
@@ -1111,7 +1111,7 @@ func testDBProvisionWatcher(t *testing.T, db interfaces.DBClient) {
}
provisionWatchers, err = db.GetProvisionWatchersByProfileId(uuid.New().String())
if err != dataBase.ErrNotFound {
if (err != nil && err != dataBase.ErrNotFound) || len(provisionWatchers) != 0 {
t.Fatalf("Error getting provisionWatchers %v", err)
}
if len(provisionWatchers) != 0 {
......
FROM ubuntu:16.04
# allow specifying the architecture from the build arg command line
ARG ARCH
# this is essentially the same as the upstream dockerfile
# here: https://github.com/snapcore/snapcraft/blob/master/docker/stable.Dockerfile
# except we also specify the architecture to download so that this works
# on other architectures
# basically, we send a command to the snap store for the info on the core +
# snapcraft snaps, extract the download link from the result and
# download and extract the snaps into the docker container
# we do this because we can't easily run snapd (and thus snaps) inside a docker
# container without disabling important security protections enabled for
# docker containers
RUN apt-get update && \
apt-get dist-upgrade --yes && \
apt-get install --yes \
curl sudo jq squashfs-tools && \
curl -s -L $(curl -s -H 'X-Ubuntu-Series: 16' -H "X-Ubuntu-Architecture: $ARCH" 'https://api.snapcraft.io/api/v1/snaps/details/core' | jq '.download_url' -r) --output core.snap && \
mkdir -p /snap/core && unsquashfs -n -d /snap/core/current core.snap && rm core.snap && \
curl -s -L $(curl -s -H 'X-Ubuntu-Series: 16' -H "X-Ubuntu-Architecture: $ARCH" 'https://api.snapcraft.io/api/v1/snaps/details/snapcraft' | jq '.download_url' -r) --output snapcraft.snap && \
mkdir -p /snap/snapcraft && unsquashfs -n -d /snap/snapcraft/current snapcraft.snap && rm snapcraft.snap && \
apt remove --yes --purge curl jq squashfs-tools && \
apt-get autoclean --yes && \
apt-get clean --yes
# the upstream dockerfile just uses this file locally from the repo, but
# rather than copy that file here, we can just download it here
# while unlikely, it is possible that the file's location could move on the
# master branch, so for stability in our builds we just hard-code the git
# commit that most recently updated this file as the revision to download from
# if this ever breaks, just change this file to copy what the upstream master dockerfile does
ADD https://raw.githubusercontent.com/snapcore/snapcraft/25043ab3667d24688b3d93dcac9f9a74f35dae9e/docker/bin/snapcraft-wrapper /snap/bin/snapcraft
RUN sed -i -e "s@\"amd64\"@$ARCH@" /snap/bin/snapcraft && chmod +x /snap/bin/snapcraft
# snapcraft will be in /snap/bin, so we need to put that on the $PATH
ENV PATH=/snap/bin:$PATH
# include all of the build context inside /build
COPY . /build
# run the entrypoint.sh script to actually perform the build when the container is run
WORKDIR /build
ENTRYPOINT [ "/build/snap/entrypoint.sh" ]
#!/bin/bash
# get the directory of this script
# snippet from https://stackoverflow.com/a/246128/10102404
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
# get the git root, which is one directory up from this script
GIT_ROOT=$(readlink -f "${SCRIPT_DIR}/..")
# if we are running inside a jenkins instance then copy the login file
# and check if this is a release job
if [ -n "$JENKINS_URL" ]; then
if [ -f "$HOME/EdgeX" ]; then
cp "$HOME/EdgeX" "$GIT_ROOT/edgex-snap-store-login"
else
echo "I seem to be running on Jenkins, but there's no snap store login file..."
fi
# figure out what kind of job this is using $JOB_NAME and simplify that
# into $JOB_TYPE
JOB_TYPE="build"
if [[ "$JOB_NAME" =~ edgex-go-snap-.*-stage-snap.* ]]; then
JOB_TYPE="stage"
elif [[ "$JOB_NAME" =~ edgex-go-snap-release-snap ]]; then
JOB_TYPE="release"
fi
fi
# build the container image - providing the relevant architecture we're on
# to determine which snap arch to download in the docker container
case $(arch) in
x86_64)
arch="amd64";;
aarch64)
arch="arm64";;
arm*)
arch="armhf";;
esac
docker build -t edgex-snap-builder:latest -f ${SCRIPT_DIR}/Dockerfile.build --build-arg ARCH="$arch" $GIT_ROOT
# delete the login file we copied to the git root so it doesn't persist around
rm -f "$GIT_ROOT/edgex-snap-store-login"
# now run the build with the environment variables
docker run --rm -e "JOB_TYPE=$JOB_TYPE" -e "SNAP_REVISION=$SNAP_REVISION" -e "SNAP_CHANNEL=$SNAP_CHANNEL" edgex-snap-builder:latest
# note that we don't need to delete the docker images here, that's done for us by jenkins in the
# edgex-provide-docker-cleanup macro defined for all the snap jobs
#!/bin/bash -e
# Required by click.
export LC_ALL=C.UTF-8
export SNAPCRAFT_SETUP_CORE=1
# this tells snapcraft to include a manifest file in the snap
# detailing which packages were used to build the snap
export SNAPCRAFT_BUILD_INFO=1
# if snapcraft ever encounters any bugs, we should force it to
# auto-report silently rather than attempt to ask for permission
# to send a report
export SNAPCRAFT_ENABLE_SILENT_REPORT=1
# clean the environment and build the snap
build_snap()
{
pushd /build > /dev/null
snapcraft clean
snapcraft
popd > /dev/null
}
# login to the snap store using the provided login macaroon file
snapcraft_login()
{
snapcraft login --with /build/edgex-snap-store-login
}
# release a locally built snap to the store
release_local_snap()
{
pushd /build > /dev/null
snapcraft_login
# push the snap up to the store and get the revision of the snap
REVISION=$(snapcraft push edgexfoundry*.snap | awk '/Revision/ {print $2}')
# now release it on the provided revision and snap channel
snapcraft release edgexfoundry $REVISION $SNAP_CHANNEL
# also update the meta-data automatically
snapcraft push-metadata edgexfoundry*.snap --force
popd > /dev/null
}
# release a snap revision already in the store
release_store_snap()
{
snapcraft_login
snapcraft release edgexfoundry $SNAP_REVISION $SNAP_CHANNEL
}
case "$JOB_TYPE" in
"stage")
# stage jobs build the snap locally and release it
build_snap
release_local_snap
;;
"release")
# release jobs will promote an already built snap revision
# in the store to a channel
release_store_snap
;;
*)
# do normal build and nothing else
build_snap
;;
esac
@@ -8,6 +8,7 @@ ALL_SERVICES=""
ALL_SERVICES="$ALL_SERVICES consul"
ALL_SERVICES="$ALL_SERVICES mongod"
ALL_SERVICES="$ALL_SERVICES mongo-worker"
ALL_SERVICES="$ALL_SERVICES redis"
# core services
ALL_SERVICES="$ALL_SERVICES core-data"
@@ -26,23 +27,26 @@ ALL_SERVICES="$ALL_SERVICES export-distro"
ALL_SERVICES="$ALL_SERVICES export-client"
# device services
ALL_SERVICES="$ALL_SERVICES device-modbus"
ALL_SERVICES="$ALL_SERVICES device-mqtt"
ALL_SERVICES="$ALL_SERVICES device-random"
ALL_SERVICES="$ALL_SERVICES device-virtual"
# security services
ALL_SERVICES="$ALL_SERVICES security-secret-store"
ALL_SERVICES="$ALL_SERVICES security-api-gateway"
# handle_svc will either turn a service off or on
# handle_svc will either turn a service off or on and set the associated
# config item
# first arg is the service, second is the state to put it in
handle_svc () {
case "$2" in
"off")
snapctl stop --disable edgexfoundry.$1;;
snapctl stop --disable "$SNAP_NAME.$1"
snapctl set "$1"=off
;;
"on")
snapctl start --enable edgexfoundry.$1;;
snapctl start --enable "$SNAP_NAME.$1"
snapctl set "$1"=on
;;
"")
# no setting for it, ignore and continue
;;
@@ -54,23 +58,42 @@ handle_svc () {
for key in $ALL_SERVICES; do
# get the config key for the service
status=$(snapctl get $key)
status=$(snapctl get "$key")
case $key in
device*)
# the device services are all using the device-sdk-go which waits
# for core-data and core-metadata to come online, so if we are
# enabling a device service, we should also enable those services
if [ "$status" = "on" ]; then
handle_svc "core-data" "on"
handle_svc "core-metadata" "on"
fi
# handle the service too
handle_svc "$key" "$status"
;;
mongo-worker|edgexproxy|vault-worker)
# it doesn't make any sense to disable the *-worker daemons since
# they are just oneshot daemons that run after another daemon, so
# they are just oneshot daemons that run after other daemons, so
# just ignore this request
;;
security-api-gateway)
# the security-api-gateway consists of the following services:
# the security-api-gateway consists of the following base services
# - kong
# - cassandra (because kong requires it)
# - edgexproxy
handle_svc "cassandra" "$status"
handle_svc "kong-daemon" "$status"
handle_svc "edgexproxy" "$status"
# additionally, the security-api-gateway needs to use the following
# services:
# - vault (because edgexproxy will access/store secrets in vault)
# - vault-worker
handle_svc "cassandra" $status
handle_svc "kong-daemon" $status
handle_svc "edgexproxy" $status
# so if we are turning the security-api-gateway on, then turn
# those services on too
if [ "$status" = "on" ]; then
handle_svc "vault" "on"
handle_svc "vault-worker" "on"
fi
;;
security-secret-store)
# the security-secret-store consists of the following services:
@@ -84,14 +107,112 @@ for key in $ALL_SERVICES; do
handle_svc "kong-daemon" "off"
handle_svc "edgexproxy" "off"
fi
handle_svc "vault" $status
handle_svc "vault-worker" $status
handle_svc "vault" "$status"
handle_svc "vault-worker" "$status"
;;
*)
# default case for all other services just enable/disable the service using
# snapd/systemd
# if the service is meant to be off, then disable it
handle_svc $key $status
handle_svc "$key" "$status"
;;
esac
done
# handle usage of database provider
dbProvider=$(snapctl get dbtype)
PREV_DB_PROVIDER_FILE="$SNAP_DATA/prevdbtype"
# create the previous-dbtype file on first run so the reads below don't fail;
# note it must NOT be truncated when it already exists, or the change
# detection below would trigger on every invocation
if [ ! -f "$PREV_DB_PROVIDER_FILE" ]; then
echo "" > "$PREV_DB_PROVIDER_FILE"
fi
case "$dbProvider" in
"")
# not set, don't do anything
;;
"redis")
# dbtype is set to redis, see what the previous value of dbtype was
prevDBtype=$(cat "$PREV_DB_PROVIDER_FILE")
if [ "$prevDBtype" != "redis" ]; then
# change from previous database provider to redis
# first change the configuration.toml files for the following
# services:
# * core-data
# * core-metadata
# * export-client
# * support-notifications
# * support-scheduler
for svc in core-data core-metadata export-client support-notifications support-scheduler; do
configFile=$SNAP_DATA/config/$svc/res/configuration.toml
toml2json --preserve-key-order "$configFile" | \
jq -r '.Databases.Primary.Type = "redisdb" | .Databases.Primary.Port = 6379' | \
json2toml --preserve-key-order > "$configFile.tmp"
mv "$configFile.tmp" "$configFile"
done
# we also need to ensure that the support-logging is set to
# use file based logging persistence
configFile=$SNAP_DATA/config/support-logging/res/configuration.toml
toml2json --preserve-key-order "$configFile" | \
jq -r '.Writable.Persistence = "file"' | \
json2toml --preserve-key-order > "$configFile.tmp"
mv "$configFile.tmp" "$configFile"
# turn mongod and mongo-worker off
handle_svc "mongod" "off"
handle_svc "mongo-worker" "off"
# turn redis on
handle_svc "redis" "on"
# since the configuration files have been modified, those
# changes need to be propagated into relevant services, so
# restart the services with updated configuration if they were
# already running
# note support-logging is here too due to the file Persistence
"$SNAP/bin/push-config.sh" core-data core-metadata export-client support-notifications support-scheduler support-logging
fi
# save the database provider as redis for next invocation
echo "redis" > "$PREV_DB_PROVIDER_FILE"
;;
"mongodb")
# dbtype is set to mongodb, see what the previous value of dbtype was
prevDBtype=$(cat "$PREV_DB_PROVIDER_FILE")
if [ "$prevDBtype" != "mongodb" ]; then
# change from previous database provider to mongodb
# first change the configuration.toml files for the following
# services:
# * core-data
# * core-metadata
# * export-client
# * support-notifications
# * support-scheduler
for svc in core-data core-metadata export-client support-notifications support-scheduler; do
configFile=$SNAP_DATA/config/$svc/res/configuration.toml
toml2json --preserve-key-order "$configFile" | \
jq -r '.Databases.Primary.Type = "mongodb" | .Databases.Primary.Port = 27017' | \
json2toml --preserve-key-order > "$configFile.tmp"
mv "$configFile.tmp" "$configFile"
done
# turn redis off
handle_svc "redis" "off"
# turn mongod and mongo-worker on
handle_svc "mongod" "on"
handle_svc "mongo-worker" "on"
# since the configuration files have been modified, those
# changes need to be propagated into relevant services, so
# restart the services with updated configuration if they were
# already running
"$SNAP/bin/push-config.sh" core-data core-metadata export-client support-notifications support-scheduler
fi
# save the database provider as mongodb for next invocation
echo "mongodb" > "$PREV_DB_PROVIDER_FILE"
;;
*)
echo "invalid setting for dbtype: $dbProvider"
exit 1
;;
esac
#!/bin/bash -e
# save this revision for when we run again in the post-refresh
snapctl set lastrev="$SNAP_REVISION"
snapctl set release="edinburgh"
#!/bin/bash
# example usage:
# $ gopartbootstrap github.com/edgexfoundry/edgex-go
gopartbootstrap()
{
# first set the GOPATH to be in the current directory and in ".gopath"
export GOPATH="$(pwd)/.gopath"
GOPATH="$(pwd)/.gopath"
export GOPATH
# setup path to include both $SNAPCRAFT_STAGE/bin and $GOPATH/bin
# the former is for the go tools, as well as things like glide, etc.
# while the latter is for govendor, etc. and other go tools that might need to be installed
export PATH="$SNAPCRAFT_STAGE/bin:$GOPATH/bin:$PATH"
# set GOROOT to be whatever the go tool from SNAPCRAFT_STAGE/bin is
export GOROOT=$(go env GOROOT)
GOROOT=$(go env GOROOT)
export GOROOT
# now setup the GOPATH for this part using the import path
export GOIMPORTPATH="$GOPATH/src/$1"
mkdir -p $GOIMPORTPATH
mkdir -p "$GOIMPORTPATH"
# note that some tools such as govendor don't work well with symbolic links, so while it's unfortunate
# we have to copy all this it's a necessary evil at the moment...
# but note that we do ignore all files that start with "." with the "./*" pattern
cp -r ./* $GOIMPORTPATH
cp -r ./* "$GOIMPORTPATH"
# finally go into the go import path to prepare for building
cd $GOIMPORTPATH
cd "$GOIMPORTPATH" || exit
}
@@ -20,7 +20,7 @@ else
CLASSPATH=$CASSANDRA_CONF
fi
for jar in $SNAP/usr/share/cassandra/lib/*.jar; do
for jar in "$SNAP/usr/share/cassandra/lib"/*.jar; do
CLASSPATH="$CLASSPATH:$jar"
done
@@ -47,7 +47,8 @@ export cassandra_storagedir="$CASSANDRA_DATA"
export JVM_OPTS="$JVM_OPTS -Dcassandra.config=file://$CASSANDRA_CONF/cassandra.yaml"
# set JAVA_HOME
export JAVA_HOME=$(ls -d $SNAP/usr/lib/jvm/java-1.8.0-openjdk-*)
JAVA_HOME=$(ls -d "$SNAP/usr/lib/jvm"/java-1.8.0-openjdk-*)
export JAVA_HOME
# The -x bit isn't set on cassandra
/bin/sh $SNAP/usr/sbin/cassandra -R -p "$CASSANDRA_HOME/cassandra.pid"
/bin/sh "$SNAP/usr/sbin/cassandra" -R -p "$CASSANDRA_HOME/cassandra.pid"
#!/bin/bash -e
export SEC_API_GATEWAY_CONFIG_DIR=${SNAP_DATA}/config/security-api-gateway
export SEC_API_GATEWAY_CONFIG_DIR=$SNAP_DATA/config/security-api-gateway
cd ${SEC_API_GATEWAY_CONFIG_DIR}
$SNAP/bin/edgexproxy --configfile=${SEC_API_GATEWAY_CONFIG_DIR}/res/configuration.toml --init=true
cd "$SEC_API_GATEWAY_CONFIG_DIR"
"$SNAP/bin/edgexproxy" --configfile="$SEC_API_GATEWAY_CONFIG_DIR/res/configuration.toml" --init=true
#!/bin/bash
# launch whatever arguments were provided in the background
"$@" &
#!/bin/bash -e
# wait for consul to come up
"$SNAP/bin/wait-for-consul.sh" "$1"
# if we get here, just assume that consul is up, otherwise this
# service will fail after 10 seconds
cd "$SNAP_DATA/config/$1"
# note we have to exec the process so it takes over the
# same pid as the calling bash process since this bash script
# is forked from another script that systemd runs
# this ensures that systemd will end up tracking the actual go
# service process and not the shell process
exec "$SNAP/bin/$1" "${@:2}"
#!/bin/bash -e
# the kong wrapper script from $SNAP
export KONG_SNAP="${SNAP}/bin/kong-wrapper.sh"
export num_tries=0
export MAX_KONG_UP_TRIES=10
export KONG_SNAP="$SNAP/bin/kong-wrapper.sh"
# run kong migrations up to bootstrap the cassandra database
# note that sometimes cassandra can be in a "starting up" state, etc.
# and in this case we should just loop and keep trying
until $KONG_SNAP migrations up --yes --conf $KONG_CONF; do
sleep 10
# increment number of tries
num_tries=$((num_tries+1))
if (( num_tries > MAX_KONG_UP_TRIES )); then
echo "max tries attempting to bring up kong"
exit 1
fi
# we don't implement a timeout here because systemd will kill us if we
# don't succeed in 15 minutes (or whatever the configured stop-timeout is)
until $KONG_SNAP migrations up --yes --conf "$KONG_CONF"; do
sleep 5
done
# now start kong normally
$KONG_SNAP start --conf $KONG_CONF
$KONG_SNAP start --conf "$KONG_CONF"
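The migrations loop above follows a common bounded-retry pattern. Here is a generic sketch of it; the `retry` helper and the `flaky` test command are hypothetical names invented for illustration, not part of the snap:

```shell
#!/bin/bash
# retry MAX PAUSE CMD...: keep running CMD until it succeeds, giving up
# after MAX failed attempts, sleeping PAUSE seconds between attempts.
retry() {
    local max=$1 pause=$2
    shift 2
    local n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$max" ]; then
            echo "giving up after $n failed tries" >&2
            return 1
        fi
        sleep "$pause"
    done
}

# simulate a command that only succeeds on its third invocation
attempts_file=$(mktemp)
flaky() {
    local n
    n=$(cat "$attempts_file" 2>/dev/null || echo 0)
    n=$((n + 1))
    echo "$n" > "$attempts_file"
    [ "$n" -ge 3 ]
}

retry 5 0 flaky && echo "succeeded on try $(cat "$attempts_file")"
rm -f "$attempts_file"
```

The kong loop above deliberately has no internal timeout because systemd's stop-timeout bounds it; the `MAX` cap here shows the alternative of failing fast from within the script.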
#!/bin/bash
# stop kong
$SNAP/bin/kong-wrapper.sh stop -p $SNAP_DATA/kong
"$SNAP/bin/kong-wrapper.sh" stop -p "$SNAP_DATA/kong"
# in some cases stopping kong doesn't fully succeed, so to ensure it is
# always left in a state from which it can start up, remove the env file
# if it somehow still exists; the next invocation of kong will then
# always be able to start
rm -f $SNAP_DATA/kong/.kong_env
rm -f "$SNAP_DATA/kong/.kong_env"
......@@ -23,16 +23,21 @@ case $SNAP_ARCH in
;;
esac
# vars that make perl warnings go away
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
# get the perl version
PERL_VERSION=$(perl -version | grep -Po '\(v\K([^\)]*)')
# perl lib paths are needed for some rocks that kong loads through luarocks dependencies
export PERL5LIB="$SNAP/usr/local/lib/$archLibName/perl/5.22.1:$SNAP/usr/local/share/perl/5.22.1:$SNAP/usr/lib/$archLibName/perl5/5.22:$SNAP/usr/share/perl5:$SNAP/usr/lib/$archLibName/perl/5.22:$SNAP/usr/share/perl/5.22:$SNAP/usr/local/lib/site_perl:$SNAP/usr/lib/$archLibName/perl-base"
PERL5LIB="$PERL5LIB:$SNAP/usr/lib/$archLibName/perl/$PERL_VERSION"
PERL5LIB="$PERL5LIB:$SNAP/usr/share/perl/$PERL_VERSION"
export PERL5LIB
# lua paths so that luarocks can work
export LUA_VERSION=5.1
export LUA_PATH="$SNAP/lualib/?.lua;$SNAP/lualib/?/init.lua;$SNAP/usr/share/lua/$LUA_VERSION/?.lua;$SNAP/usr/share/lua/$LUA_VERSION/?/init.lua;$SNAP/lib/lua/$LUA_VERSION/?.lua;$SNAP/lib/lua/$LUA_VERSION/?/init.lua;$SNAP/share/lua/$LUA_VERSION/?.lua;$SNAP/share/lua/$LUA_VERSION/?/init.lua;;"
export LUA_CPATH="$SNAP/lualib/?.so;$SNAP/lib/lua/$LUA_VERSION/?.so;$SNAP/lib/$archLibName/lua/$LUA_VERSION/?.so;;"
# vars that make perl warnings go away
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
exec "$SNAP/bin/kong" "$@"
"$SNAP/bin/kong" "$@"
......@@ -3,7 +3,7 @@
# try to initialize mongo, giving up after a reasonable number of tries
MAX_TRIES=10
num_tries=0
until mongo $SNAP/mongo/init_mongo.js; do
until mongo "$SNAP/mongo/init_mongo.js"; do
sleep 5
# increment number of tries
num_tries=$((num_tries+1))
......
#!/bin/bash -e
# check the mongo database path
MONGO_DATA_DIR="$SNAP_DATA"/mongo/db
if [ ! -e "$MONGO_DATA_DIR" ] ; then
mkdir -p "$MONGO_DATA_DIR"
fi
# now start up mongo
"$SNAP/bin/mongod" --dbpath "$MONGO_DATA_DIR" --logpath "$SNAP_COMMON/mongodb.log" --smallfiles
#!/bin/bash -e
# push the configuration files into consul
"$SNAP/bin/config-seed" \
--cmd "$SNAP_DATA/config" \
-confdir "$SNAP_DATA/config/config-seed/res" \
--props "$SNAP_DATA/config/config-seed/res/properties" \
-overwrite
# if no arguments were provided, then restart all services that are currently
# running
if [ $# -eq 0 ]; then
# restart all active edgex services to ensure that they pick up their new
# configuration from consul
# for now, limit ourselves to the core-*, export-*, support-*, device-*,
# sys-mgmt-agent, and security-service helper services
# this means if a user changes e.g. kong configuration they will need to
# restart kong-daemon manually
# TODO: maybe implement some kind of file hashing to determine which services
# had their configs changed and only restart changed services?
for svc in $(snapctl services | grep "core-*\|export-*\|support-*\|sys-mgmt-agent\|device-*\|vault-worker\|edgexproxy" | grep -v inactive | grep active | awk '{print $1}'); do
snapctl restart "$svc"
done
fi
# otherwise restart the args provided, assuming they are all names of
# services in the snap
set +e
for svc in "$@"; do
# check if it's a known service - if not fail
SNAP_NAME_SVC="$SNAP_NAME.$svc"
if ! snapctl services | grep -q "$SNAP_NAME_SVC" ; then
echo "unknown service \"$svc\""
exit 1
fi
# check if it's running - if so restart
if snapctl services | grep "$SNAP_NAME_SVC" | grep -q -v inactive; then
snapctl restart "$SNAP_NAME_SVC"
fi
done
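The name and state checks above can be exercised outside a snap by stubbing out `snapctl services`. This is a sketch only; the stub's table contents and the `edgexfoundry` snap name are invented for illustration:

```shell
#!/bin/bash
# Stub mimicking snapctl's "Service  Startup  Current" table output.
snapctl_services() {
    cat <<'EOF'
Service                     Startup  Current
edgexfoundry.core-data      enabled  active
edgexfoundry.core-metadata  enabled  inactive
EOF
}

# is the service listed at all?
known_service() {
    snapctl_services | grep -q "edgexfoundry.$1"
}

# is the service listed and not inactive?
running_service() {
    snapctl_services | grep "edgexfoundry.$1" | grep -q -v inactive
}

known_service core-data       && echo "core-data is known"
running_service core-data     && echo "core-data is active"
running_service core-metadata || echo "core-metadata is not active"
```

Factoring the grep pipelines into named functions like this also makes the restart script's two checks (existence, then activity) easier to test independently.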
#!/bin/sh
#
# This script includes some configuration copied from the
# core-config-seed's Dockerfile, and is otherwise based
# on two shell scripts which exist in the same directory.
#
# - launch-consul-config.sh
# - docker-entrypoint.sh
#
set -e
#!/bin/bash -e
CONSUL_ARGS="-server -client=0.0.0.0 -bind=127.0.0.1 -bootstrap -ui"
# start consul in the background
"$SNAP/bin/consul" agent \
-data-dir="$SNAP_DATA/consul/data" \
-config-dir="$SNAP_DATA/consul/config" \
-server -client=0.0.0.0 -bind=127.0.0.1 -bootstrap -ui &
CONSUL_DATA_DIR="$SNAP_DATA"/consul/data
CONSUL_CONFIG_DIR="$SNAP_DATA"/consul/config
LOG_DIR="$SNAP_COMMON"
# loop trying to connect to consul, as soon as we are successful exit
# NOTE: ideally consul would be able to notify systemd directly, but currently
# it only uses systemd's notify socket if consul is _joining_ another cluster
# and not when bootstrapping
# see https://github.com/hashicorp/consul/issues/4380
# Handle directory creation & data cleanup
if [ -e "$CONSUL_DATA_DIR" ] ; then
rm -rf "${CONSUL_DATA_DIR:?}"/*
else
mkdir -p "$CONSUL_DATA_DIR"
fi
if [ ! -e "$CONSUL_CONFIG_DIR" ] ; then
mkdir -p "$CONSUL_CONFIG_DIR"
fi
if [ ! -e "$LOG_DIR" ] ; then
mkdir -p "$LOG_DIR"
fi
# Run available startup hooks to have a point to store custom
# logic outside of this script. More of the things from above
# should be moved into these.
#for hook in $SNAP/startup-hooks/* ; do
# [ -x "$hook" ] && /bin/sh -x "$hook"
#done
exec "$SNAP"/bin/consul agent \
-data-dir="$CONSUL_DATA_DIR" \
-config-dir="$CONSUL_CONFIG_DIR" \
$CONSUL_ARGS | tee "$LOG_DIR"/core-consul.log
# to actually test if consul is ready, we simply check to see if consul
# itself shows up in its service catalog
# also note we don't have a timeout here because we use start-timeout for this
# daemon so systemd will kill us if we take too long waiting for this
CONSUL_URL=http://localhost:8500/v1/catalog/service/consul
until [ -n "$(curl -s $CONSUL_URL | jq -r '. | length')" ] &&
[ "$(curl -s $CONSUL_URL | jq -r '. | length')" -gt "0" ] ; do
sleep 1
done
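The readiness test above can be sketched with the HTTP fetch made injectable, so the loop body can be exercised without a live consul. Here `echo` stands in for `curl -s $CONSUL_URL`, and the jq length check is approximated by rejecting empty output and the empty JSON array; all names are illustrative:

```shell
#!/bin/bash
# catalog_nonempty CMD...: run CMD, succeed only if it succeeds and its
# output is neither empty nor the empty JSON array "[]".
catalog_nonempty() {
    local body
    body=$("$@") || return 1
    [ -n "$body" ] && [ "$body" != "[]" ]
}

catalog_nonempty echo '[]' || echo "not ready yet"
catalog_nonempty echo '[{"Node":"edgex-core-consul"}]' && echo "consul is ready"
```

In the real script the same predicate is applied in an `until` loop with a one-second sleep, relying on the daemon's start-timeout as the overall bound.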
......@@ -14,5 +14,5 @@ JAVA="$SNAP"/usr/lib/jvm/java-8-openjdk-"$ARCH"/jre/bin/java
$JAVA -jar -Djava.security.egd=file:/dev/urandom -Xmx100M \
-Dspring.cloud.consul.enabled=true \
-Dspring.cloud.consul.host=localhost \
-Dlogging.file=$SNAP_COMMON/logs/edgex-support-rulesengine.log \
$SNAP/jar/support-rulesengine/support-rulesengine.jar
-Dlogging.file="$SNAP_COMMON/logs/edgex-support-rulesengine.log" \
"$SNAP/jar/support-rulesengine/support-rulesengine.jar"
#!/bin/bash -e
export CONFIG_DIR=${SNAP_DATA}/config
export SEC_SEC_STORE_CONFIG_DIR=${CONFIG_DIR}/security-secret-store
export SEC_API_GATEWAY_CONFIG_DIR=${CONFIG_DIR}/security-api-gateway
export CONFIG_DIR=$SNAP_DATA/config
export SEC_SEC_STORE_CONFIG_DIR=$CONFIG_DIR/security-secret-store
export SEC_API_GATEWAY_CONFIG_DIR=$CONFIG_DIR/security-api-gateway
# run the vault-worker
cd ${SEC_SEC_STORE_CONFIG_DIR}
cd "$SEC_SEC_STORE_CONFIG_DIR"
$SNAP/bin/vault-worker --init=true --configfile=${SEC_SEC_STORE_CONFIG_DIR}/res/configuration.toml
"$SNAP/bin/vault-worker" --init=true --configfile="$SEC_SEC_STORE_CONFIG_DIR/res/configuration.toml"
# copy the kong access token to the config directory for the security-api-gateway so it has
# perms to read the certs from vault and upload them into kong
cp ${SEC_SEC_STORE_CONFIG_DIR}/res/kong-token.json ${SEC_API_GATEWAY_CONFIG_DIR}/res/kong-token.json
cp "$SEC_SEC_STORE_CONFIG_DIR/res/kong-token.json" "$SEC_API_GATEWAY_CONFIG_DIR/res/kong-token.json"
#!/bin/bash
# unfortunately, until snapd bug https://bugs.launchpad.net/snapd/+bug/1796125
# is fixed, service startup order on install is not guaranteed, so we need to
# do a little hand-holding for the go services: they only wait 10 seconds for
# consul to come alive, but on some systems consul takes longer than that, in
# which case the services fail to start up
# this loop gives consul 50 seconds to come up
MAX_TRIES=10
while [ "$MAX_TRIES" -gt 0 ] ; do
CONSUL_RUNNING=$(curl -s http://localhost:8500/v1/catalog/service/consul)
    if [ $? -ne 0 ] ||
       [ -z "$CONSUL_RUNNING" ] ||
       [ "$CONSUL_RUNNING" = "[]" ]; then
        echo "$1: consul not running; remaining tries: $MAX_TRIES"
sleep 5
MAX_TRIES=$(($MAX_TRIES - 1))
else
break
fi
done
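A side note on the `$?` check in the loop above, sketched for illustration: a plain assignment from a command substitution preserves the command's exit status, so testing `$?` right after `CONSUL_RUNNING=$(curl ...)` really does observe curl's result. Combining the assignment with `local` or `export` would mask it, because the builtin itself succeeds:

```shell
#!/bin/bash
set +e  # we deliberately inspect failure statuses below

check_plain() {
    out=$(false)
    echo $?          # exit status of `false`
}

check_local() {
    local out=$(false)
    echo $?          # exit status of `local`, not of `false`
}

echo "plain=$(check_plain) local=$(check_local)"   # → plain=1 local=0
```

This is why the script's `CONSUL_RUNNING=$(curl ...)` assignment is written as a bare statement rather than folded into a declaration.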
......@@ -3,7 +3,7 @@
# this file maintains a number of changes, i.e. all
# hostnames are localhost instead of the docker hostnames,
# the tls cert and key files reference localhost as the common name,
# and the location of the files uses reference to $SNAP_DATA
# and the location of the files uses reference to SNAP_DATA
# (but note that these paths have to be absolute, so we process the env
# vars during the install hook)
......