2.22 score from hupso.pl for:
mwclearning.com



HTML Content


Title mark's it blog

Length: 19, Words: 4
Description kubernetes examples have been...google cloud shell -...service. the services must...kubernetes-on-vagrant.html that set up...deployment pipline with

Length: 148, Words: 20
Keywords kubernetes,google,service,kubernetes,deployment,error,requests,gatling,letsencrypt,evernote,aws,cloud
Robots noindex,follow,noodp,noarchive,noydir
Charset UTF-8
Og Meta - Title empty
Og Meta - Description empty
Og Meta - Site name empty
The title should contain between 10 and 70 characters (including spaces) and be fewer than 12 words long.
The meta description should contain between 50 and 160 characters (including spaces) and be fewer than 24 words long.
The character encoding should be declared; UTF-8 is probably the best character set to go with, as it is the most international encoding.
Open Graph objects should be present on the web page (more information on the OpenGraph protocol: http://ogp.me/).

SEO Content

Words/Characters 10019
Text/HTML 22.19 %
Headings H1 1
H2 10
H3 8
H4 4
H5 0
H6 0
H1
mark's it blog
H2
deploying microservices
introduction to microservices and containers with docker
intro to kubernetes
testing kubernetes and coreos
why look at kubernetes and coreos
monitoring client side performance and javascript errors
getting started with gatling – part 2
getting started with gatling – part 1
transitioning from standard ca to letsencrypt!
download all evernote attachments via evernote api with python
H3
using the recorder
dealing with authentication headers
requests dependent on the previous response
validating responses
working through the gatling quickstart
provisioning and auto-renewing apache and nginx tls/ssl certs
provisioning and auto-renewing aws elastic load balancer tls/ssl certs
H4
start with the basics
why coreos for docker hosts?
why kubernetes?
next steps
H5
H6
strong
deployments
services
on to building containers with docker
create a load balanced nginx deployment:
deploy demo application
new relic browser
google analytics
appdynamics browser
sentry.io 
end user experience
b
i
deployments
services
on to building containers with docker
create a load balanced nginx deployment:
deploy demo application
new relic browser
google analytics
appdynamics browser
sentry.io 
end user experience
em
deployments
services
on to building containers with docker
create a load balanced nginx deployment:
deploy demo application
new relic browser
google analytics
appdynamics browser
sentry.io 
end user experience
Bolds
strong 10
b 0
i 10
em 10
The page content should contain more than 250 words, with a text/HTML ratio higher than 20%.
Headings – use heading tags (h1, h2, h3, ...) to define the topics of sections or paragraphs on the page, but as a rule use fewer than 6 of each heading tag to keep your page concise.
Style – use strong and italic tags to emphasise your page's keywords, but do not overuse them (fewer than 16 strong tags and 16 italic tags).

Page statistics

twitter:title empty
twitter:description empty
google+ itemprop=name empty
External files 23
CSS files 9
JavaScript files 14
Files – reduce the total number of referenced files (CSS + JavaScript) to a maximum of 7-8.

Internal and external links

Links 140
Internal links 12
External links 128
Links without a Title attribute 78
Links with the NOFOLLOW attribute 0
Links – use the title attribute for every link. A nofollow link tells search engine bots not to follow the link; pay attention to how they are used.

External links

mark's it blog https://mwclearning.com
infosec https://mwclearning.com/?cat=31
itops https://mwclearning.com/?cat=25
random https://mwclearning.com/?cat=1
data mining https://mwclearning.com/?cat=24
scalable microservices with kubernetes https://mwclearning.com/?cat=35
intro to devops https://mwclearning.com/?cat=34
functional programming – scala https://mwclearning.com/?cat=27
introduction to parallel programming https://mwclearning.com/?cat=28
advanced network security https://mwclearning.com/?cat=22
network security https://mwclearning.com/?cat=17
reading unit – dos research https://mwclearning.com/?cat=23
natural computation for intell. sys. https://mwclearning.com/?cat=20
intelligent systems https://mwclearning.com/?cat=18
grid computing https://mwclearning.com/?cat=13
it research methods https://mwclearning.com/?cat=21
foundations of programming https://mwclearning.com/?cat=9
data communications https://mwclearning.com/?cat=15
systems analysis and design https://mwclearning.com/?cat=3
internet application development https://mwclearning.com/?cat=12
computer technologies and o/s https://mwclearning.com/?cat=8
database technology https://mwclearning.com/?cat=5
rss http://www.mwclearning.com/?feed=rss2
deploying microservices https://mwclearning.com/?p=1756
october 21, 2016 https://mwclearning.com/?m=20161021
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-render.html https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-render.html
introduction to microservices and containers with docker https://mwclearning.com/?p=1742
october 20, 2016 https://mwclearning.com/?m=20161020
scalable microservices with kubernetes https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
the twelve-factor app https://12factor.net/
golang https://golang.org/
google cloud shell https://cloud.google.com/shell/docs/
docker https://www.docker.com/
kubernetes http://kubernetes.io/
google container engine https://cloud.google.com/container-engine/
google cloud shell https://cloud.google.com/shell/docs/quickstart
twelve-factor apps https://12factor.net
json web tokens https://jwt.io
intro to kubernetes https://mwclearning.com/?p=1747
october 20, 2016 https://mwclearning.com/?m=20161020
kubernetes cheat sheet http://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
testing kubernetes and coreos https://mwclearning.com/?p=1726
october 18, 2016 https://mwclearning.com/?m=20161018
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08 https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08
http://kubernetes.io/docs/user-guide/configuring-containers/ http://kubernetes.io/docs/user-guide/configuring-containers/
https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/guestbook/readme.md https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/guestbook/readme.md
ingress resource http://kubernetes.io/docs/user-guide/ingress/#what-is-ingress
https://github.com/kubernetes/contrib/tree/master/ingress/controllers https://github.com/kubernetes/contrib/tree/master/ingress/controllers
https://amdatu.org/infra/ https://amdatu.org/infra/
why look at kubernetes and coreos https://mwclearning.com/?p=1721
september 29, 2016 https://mwclearning.com/?m=20160929
https://coreos.com/why/ https://coreos.com/why/
http://kubernetes.io/docs/whatisk8s/ http://kubernetes.io/docs/whatisk8s/
monitoring client side performance and javascript errors https://mwclearning.com/?p=1713
september 21, 2016 https://mwclearning.com/?m=20160921
rise of single page apps https://medium.com/@vivainio/angular-is-ok-49bfd7924fc1#.vtdczsmd7
sentry.io https://sentry.io/
new relic browser https://newrelic.com/browser-monitoring
appdynamics browser rum https://www.appdynamics.com/product/browser-real-user-monitoring/
dynatrace uem https://www.dynatrace.com/en/freemiums/personal-license-terms.html
saas offering  https://www.dynatrace.com/trial/
http://www.davidverhasselt.com/an-easy-javascript-error-logger-using-ga/ http://www.davidverhasselt.com/an-easy-javascript-error-logger-using-ga/
icinga2 https://www.icinga.org/products/icinga-2/
slack https://slack.com
unix philosophy https://en.wikipedia.org/wiki/unix_philosophy#origin
getting started with gatling – part 2 https://mwclearning.com/?p=1678
june 5, 2016 https://mwclearning.com/?m=20160605
check api http://gatling.io/docs/2.2.1/http/http_check.html
jsonpath http://goessner.net/articles/jsonpath/
foreach http://gatling.io/docs/2.2.1/general/scenario.html?highlight=foreach#loop-statements
queue application performance sales pitch https://blog.kissmetrics.com/wp-content/uploads/2011/04/loading-time-lrg.jpg
checks http://gatling.io/docs/2.2.1/http/http_check.html#concepts
assertions api http://gatling.io/docs/2.0.0-rc2/general/assertions.html
getting started with gatling – part 1 https://mwclearning.com/?p=1669
may 30, 2016 https://mwclearning.com/?m=20160530
gatling https://gatling.io/
jmeter https://jmeter.apache.org/
https://blog.flood.io/stress-testing-jmeter-and-gatling/ https://blog.flood.io/stress-testing-jmeter-and-gatling/
https://octoperf.com/blog/2015/06/08/jmeter-vs-gatling/ https://octoperf.com/blog/2015/06/08/jmeter-vs-gatling/
http://badmoodperf.blogspot.com.au/2014/01/gatling-vs-jmeter-fact-checking.html http://badmoodperf.blogspot.com.au/2014/01/gatling-vs-jmeter-fact-checking.html
http://gatling.io/docs/2.2.1/quickstart.html#quickstart http://gatling.io/docs/2.2.1/quickstart.html#quickstart
http://gatling.io/docs/2.2.1/advanced_tutorial.html#advanced-tutorial http://gatling.io/docs/2.2.1/advanced_tutorial.html#advanced-tutorial
expression language http://gatling.io/docs/2.0.1/session/expression_el.html
transitioning from standard ca to letsencrypt! https://mwclearning.com/?p=1658
may 29, 2016 https://mwclearning.com/?m=20160529
https://letsencrypt.org/ https://letsencrypt.org/
acme protocol https://github.com/letsencrypt/acme-spec
alex gaynor https://alexgaynor.net/
https://github.com/alex/letsencrypt-aws https://github.com/alex/letsencrypt-aws
https://hub.docker.com/r/alexgaynor/letsencrypt-aws/ https://hub.docker.com/r/alexgaynor/letsencrypt-aws/
https://hub.docker.com/r/alexgaynor/letsencrypt-aws/ https://hub.docker.com/r/alexgaynor/letsencrypt-aws/
https://github.com/markz0r/tools/tree/master/ssl_check_complete https://github.com/markz0r/tools/tree/master/ssl_check_complete
download all evernote attachments via evernote api with python https://mwclearning.com/?p=1647
april 24, 2016 https://mwclearning.com/?m=20160424
https://github.com/markz0r/tools/blob/master/backup_scripts/evernote_backup.py https://github.com/markz0r/tools/blob/master/backup_scripts/evernote_backup.py
2 https://mwclearning.com/?paged=2
3 https://mwclearning.com/?paged=3
21 https://mwclearning.com/?paged=21
» https://mwclearning.com/?paged=2
markz0r https://github.com/markz0r
https://mwclearning.com/ https://mwclearning.com/
5 public repositories https://github.com/markz0r/repositories
0 public gists https://gist.github.com/markz0r
design by frenchtastic.eu http://frenchtastic.eu

Images

Images 4
Images without ALT attribute 2
Images without TITLE attribute 3
Use the ALT and TITLE attributes for every image.

Images without TITLE attribute

https://mwclearning.com/wp-content/uploads/2016/10/screenshot-2016-10-21-14.34.27.png
http://www.mwclearning.com/wp-content/uploads/2016/09/apm_logos.png
https://avatars.githubusercontent.com/u/965817?v=3

Images without ALT attribute

https://assets-cdn.github.com/favicon.ico
https://avatars.githubusercontent.com/u/965817?v=3

Ranking:


Alexa Traffic: Daily Global Rank Trend, Daily Reach (Percent) – charts not captured

Majestic SEO – charts not captured

Text on page:

mark's it blog search for: infosec itops random projects data mining courses scalable microservices with kubernetes intro to devops functional programming – scala introduction to parallel programming uni advanced network security network security reading unit – dos research natural computation for intell. sys. intelligent systems grid computing it research methods foundations of programming data communications systems analysis and design internet application development computer technologies and o/s database technology rss

deploying microservices posted on october 21, 2016

so far the kubernetes examples have been little more than what could be accomplished with bash, docker and jenkins. now we shall look at how kubernetes can be used for more effective management of application deployment and configuration. enter desired state. deployments are used to define our desired state, then work with replication controllers to ensure desired state is met. a deployment is an abstraction over pods. services are used to group pods and provide an interface to them. scaling is up next. using the deployment's configuration file, updating the replicas field and running kubectl apply -f is all that needs to be done! well, it's not quite that simple. that scales the number of replica pods deployed to our kubernetes cluster; it does not change the amount of machine (vm/physical) resources in the cluster. so… i would not really call this scaling :(. on to updating (patching, new version etc). there are two types of deployments, rollout and blue-green. rollouts can be conducted by updating the deployment config's (container->image) reference then running kubectl apply -f. this will automatically conduct a staged rollout. ok, so that's the end of the course – it was not very deep, and did not cover anything like dealing with persistent layers. nonetheless it was good to review the basics. next step is to understand the architecture of my application running on kubernetes in aws. at first i read a number of threads stating that kubernetes does not support cross availability zone clusters in aws. cross availability zone clusters are supported on aws: "kube-aws supports 'spreading' a cluster across any number of availability zones in a given region" – https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-render.html. with that in mind, the following architecture is what i will be moving to: [kubernetes high level architecture diagram]. instead of haproxy i will stick with our existing nginx reverse proxy.
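to make the scale-and-rollout flow above concrete, here is a hedged sketch; the file name and manifest are my own, not from the course, and the extensions/v1beta1 api group matches the kubernetes 1.3/1.4 releases current at the time of writing:

# edit replicas and re-apply to scale; edit the image tag and re-apply for a staged rollout
cat > nginx-deployment.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3               # bump this field, re-apply, and the pod count follows
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10.0 # change this tag and re-apply to trigger a rolling update
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx   # watch the staged rollout complete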
introduction to microservices and containers with docker posted on october 20, 2016

after running through some unguided examples of kubernetes i still don't feel confident that i am fully grasping the correct ways to leverage the tool. fortunately there is a course on udacity that seems to be right on topic… scalable microservices with kubernetes. the first section, introduction to microservices, references a number of resources including the twelve-factor app, which is a nice little manifesto. the tools used in the course are: golang – a newish programming language from the creators of c (at google); google cloud shell – a temp vm preloaded with the tools needed to manage our clusters; docker – to package, distribute, and run our application; kubernetes – to handle management, deployment and scaling of the application; google container engine – gke is a hosted kubernetes service. the introduction to microservices lesson goes on to discuss the benefits of microservices and why they are being used (boils down to faster development). the increased requirements for automation with microservices are also highlighted. we then go on to set up gce (google compute engine), creating a new project and enabling the compute engine and container engine apis. to manage the google cloud platform project we used the google cloud shell. on the google cloud shell we did some basic testing and installation of golang; i am not sure what the point of that was, as the cloud shell is just a management tool(?). next step was a review of twelve-factor apps (portable, continually deployable, scalable) and json web tokens (jwt – authenticating and validating client->microservice messages). all pretty straightforward — on to building containers with docker. now we want to build, package, distribute and run our code. creating containers is easy with docker, and that enables us to be more sure about the dependencies and run environment of our microservices. part 1 – spin up a vm:

# set session zone
gcloud config set compute/zone asia-east1-c
# start instance
gcloud compute instances create ubuntu \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160420c
# login
gcloud compute ssh ubuntu
# note that starting an instance like this makes it open to the world on all ports

after demonstrating how difficult it is to run multiple instances/versions of a service on an os, the arguments for containers and the isolation they enable were brought forth: process (kind of), package, network, namespace etc. a basic docker demo was then conducted, followed by creating a couple of dockerfiles, building some images and starting some containers. the images were then pushed to a registry with some discussion on public and private registries.
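the dockerfiles from the lesson are not reproduced here, so as a stand-in, a hedged minimal example of the build/run/push cycle described above (image name, binary and registry are hypothetical):

# hypothetical dockerfile for a statically compiled go binary
cat > Dockerfile <<'EOF'
FROM alpine:3.4
ADD hello /usr/local/bin/hello      # binary built beforehand, e.g. CGO_ENABLED=0 go build -o hello
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/hello"]
EOF
docker build -t example/hello:1.0.0 .   # tag is illustrative
docker run -d -p 8080:8080 example/hello:1.0.0
docker push example/hello:1.0.0         # push to docker hub or a private registry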
intro to kubernetes posted on october 20, 2016

ok – now we are getting to the interesting stuff. given we have a microservices architecture using docker, how do we effectively operate our service? the services must include production environments, testing, monitoring, scaling etc. problems/challenges with microservices – organisational structure, automation requirements, discovery requirements. we have seen how to package up a single service, but that is a small part of the operating-microservices problem. kubernetes is suggested as a solution for: app configuration, service discovery, managing updates/deployments, monitoring. create a cluster (ie: coreos cluster) and treat it as a single machine. into a practical example:

# initiate kubernetes cluster on gce
gcloud container clusters create k0
# launch a single instance
kubectl run nginx --image=nginx:1.10.0
# list pods
kubectl get pods
# expose nginx to the world via a load balancer provisioned by gce
kubectl expose deployment nginx --port 80 --type LoadBalancer
# list services
kubectl get services

(see also the kubernetes cheat sheet.) next was a discussion of the kubernetes components: pods (containers, volumes, namespace, single ip), monitoring, readiness/health checks, configmaps and secrets, services, labels. creating secrets:

# create secrets for all files in dir
kubectl create secret generic tls-certs --from-file=tls/
# describe secrets you have just created
kubectl describe secrets tls-certs
# create a configmap
kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf
# describe the configmap just created
kubectl describe configmap nginx-proxy-conf

now that we have our tls-certs and nginx-proxy-conf defined in the kubernetes cluster, they must be exposed to the correct pods. this is accomplished within the pod yaml definition:

volumes:
  - name: "tls-certs"
    secret:
      secretName: "tls-certs"
  - name: "nginx-proxy-conf"
    configMap:
      name: "nginx-proxy-conf"
      items:
        - key: "proxy.conf"
          path: "proxy.conf"
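note the volumes block above only declares the volumes; the container spec also needs matching volumemounts before nginx can read the files. a hedged sketch of the full pod (the mount paths are my assumption, not from the course):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
  labels:
    app: "monolith"
    secure: "enabled"
spec:
  containers:
  - name: nginx
    image: nginx:1.10.0
    volumeMounts:
    - name: "tls-certs"
      mountPath: "/etc/tls"            # each key in the secret appears as a file here
    - name: "nginx-proxy-conf"
      mountPath: "/etc/nginx/conf.d"   # proxy.conf lands here via the items/path mapping
  volumes:
  - name: "tls-certs"
    secret:
      secretName: "tls-certs"
  - name: "nginx-proxy-conf"
    configMap:
      name: "nginx-proxy-conf"
      items:
      - key: "proxy.conf"
        path: "proxy.conf"
EOF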
in production you will want to expose pods using services. services are a persistent endpoint for pods. if pods have a specific label then they will automatically be added to the correct service pool when confirmed alive. there are currently 3 service types: clusterip – internal only; nodeport – each node gets an external ip that is accessible; loadbalancer – a load balancer from the cloud service provider (gce and aws(?) only). accessing a service using nodeport:

# create a service
kubectl create -f ./services/monolith.yaml

kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort

# open the nodeport port to the world on all cluster nodes
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000
# list external ip of compute nodes
gcloud compute instances list
NAME                               ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-k0-default-pool-0bcbb955-32j6  asia-east1-c  n1-standard-1               10.140.0.4   104.199.198.133  RUNNING
gke-k0-default-pool-0bcbb955-7ebn  asia-east1-c  n1-standard-1               10.140.0.3   104.199.150.12   RUNNING
gke-k0-default-pool-0bcbb955-h7ss  asia-east1-c  n1-standard-1               10.140.0.2   104.155.208.48   RUNNING

now any request to those external_ips on port 31000 will be routed to pods that have the label "app=monolith,secure=enabled" (as defined in the service yaml):

# get pods meeting service label definition
kubectl get pods -l "app=monolith,secure=enabled"
kubectl describe pods secure-monolith

okay – so that, like the unguided demo i worked through previously, was very light on. i am still not clear on how i would manage a microservices application using the kubernetes tool. how do i do deployments, how do i monitor and alert, how do i load balance (if not in google cloud), how do i do service discovery/enrolment? there's one more lesson to go in the course, so hopefully "deploying microservices" is more illuminating.

testing kubernetes and coreos posted on october 18, 2016

in the previous post i described some of the general direction and 'wants' for the next step of our it ops, summarised as (want – description):
- continuous deployment – we need more automation and resiliency in our deployment, without adding our own code that needs to be changed when architecture and service dependencies change
- automation of deployments – deployments, rollbacks, service discovery, easy local deployments for devs
- less time on updates – automation of updates
- reduced dependence on config management (puppet) – reduce the number of puppet policies that are applied to hosts
- image management – image management (with immutable post deployment)
- reduce baseline work for it staff – it staff have low baseline work, more room for initiatives
- reduce hardware footprint – there can be no increase in hardware resource requirements (cost)
start with the basics: let's start with the simple demo deployment supplied by the coreos team: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html. that set up was pretty straightforward (as supplied demos usually are). simple verification that the k8s components are up and running:

vagrant global-status
# expected output assuming 1 etcd, 1 k8s controller and 2 k8s workers as defined in config.rb
id      name provider   state   directory
----------------------------------------------------------------------------------------
2146bec e1   virtualbox running virtualbox vms/coreos-kubernetes/multi-node/vagrant
87d498b c1   virtualbox running virtualbox vms/coreos-kubernetes/multi-node/vagrant
46bac62 w1   virtualbox running virtualbox vms/coreos-kubernetes/multi-node/vagrant
f05e369 w2   virtualbox running virtualbox vms/coreos-kubernetes/multi-node/vagrant

# set kubectl config and context
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi
kubectl get nodes
# expected output
name           status                    age
172.17.4.101   ready,schedulingdisabled  4m
172.17.4.201   ready                     4m
172.17.4.202   ready                     4m

kubectl cluster-info
# expected output
kubernetes master is running at https://172.17.4.101:443
heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
kubedns is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at …

*note: it can take some time (5 mins or longer if coreos is updating) for the kubernetes cluster to become available. to see status, vagrant ssh c1 (or w1/w2/e1) and run journalctl -f (following service logs). accessing the kubernetes dashboard requires tunnelling, which if using the vagrant set up can be accomplished with: https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08 (note, that is for single node; if using multi-node then change line 21 to:

vagrant ssh c1 -c "if [ ! -d /home/$username ]; then sudo useradd $username -m -s /bin/bash && echo '$username:$password' | sudo chpasswd; fi"

) now the dashboard can be accessed on http://localhost:9090/.
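worth noting as a hedged alternative: since kubectl is already configured against the cluster, kubectl proxy may be enough to reach the dashboard without creating a user over ssh (the /ui redirect is my assumption for this kubernetes version, so verify against yours):

# proxy the api server to localhost instead of tunnelling
kubectl proxy --port=9090 &
# the dashboard should then answer on the /ui redirect
curl -sI http://localhost:9090/ui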
now let's do some simple k8s examples. create a load balanced nginx deployment:

# create 2 containers from nginx image (docker hub)
kubectl run my-nginx --image=nginx --replicas=2 --port=80
# expose the service to the internet
kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
# list service nodes
kubectl get po
# show service info
kubectl get service my-nginx
kubectl describe service/my-nginx

first interesting point… with the simple deployment above, i have already gone awry. though i have 2 nginx containers (presumably for redundancy and load balancing), they have both been deployed on the same worker node (host). let's not get bogged down now — will keep working through examples, which probably cover how to ensure redundancy across hosts.

# delete the service, removes pods and containers
kubectl delete deployment,service my-nginx

reviewed config file (pod) options: http://kubernetes.io/docs/user-guide/configuring-containers/. deploy demo application (https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/guestbook/readme.md): create a service for redis master, redis slaves and frontend; create a deployment for redis master, redis slaves and frontend. pretty easy.. now how do we get external traffic to the service? either nodeports, loadbalancers or an ingress resource (?). next let's look at how to extend kubernetes to experiment with nodeports, loadbalancers and ingress resources: create an ingress controller (as a pod) – https://github.com/kubernetes/contrib/tree/master/ingress/controllers; create an ingress resource to route requests; enable deployments, external load balancing and machine scaling (https://amdatu.org/infra/).

why look at kubernetes and coreos posted on september 29, 2016

we are currently operating a service oriented architecture that is 'dockerized', with both host and containers running centos 7, deployed straight on top of ec2 instances. we also have a deployment pipeline with beanstalk + homegrown scripts. i imagine our position/maturity is similar to a lot of smes: we have aspirations of being on top of new technologies/practices but are somewhere in between old school and new school (old school – new school):
- it and dev separate – devops (ops and devs have the same goals and responsibilities)
- monolithic/large services – microservices
- big releases – continuous deployment
- some automation – almost total automation with self-service
- static scaling – dynamic scaling
- config management – image management (with immutable deployments)
- it staff have a high baseline work – it staff have low baseline work, more room for initiatives

this is not about which end of this incomplete spectrum is better… we have decided that for our place in the world, moving further to the left is desirable. i know there are a lot of experienced it operators that take this view: why coreos for docker hosts?
coreos: "a lightweight linux operating system designed for clustered deployments providing automation, security, and scalability for your most critical applications" – https://coreos.com/why/. our application and supporting services run in docker; there should not be any dependencies on the host operating system (apart from the docker engine and storage mounts). some questions i ask myself now: why do i need to monitor for and stage deployments of updates? why am i managing packages on a host os that could be immutable (like coreos is, kind of)? why am i managing what should be homogeneous machines with puppet? why am i nursing host machines back to health when things go wrong (instead of blowing them away and redeploying)? why do i need to monitor selinux events? i want a docker host os that is/has: smaller, stricter, homogeneous and disposable; built-in host and service clustering; as little management as possible post deployment. coreos looks good for removing the first set of questions and sufficing the wants.

why kubernetes? kubernetes: "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts" – http://kubernetes.io/docs/whatisk8s/. some questions i ask myself now: should my deployment, monitoring and scaling be completely separate, or be a platform? why do i (it ops) still need to be around for prod deployments (no automatic success criteria for staged deploys and no automatic rollback)? why are our deployment scripts so complex and non-portable? do i want a scaling solution outside of aws auto-scaling groups? i want a tool/platform to: streamline and rationalise our complex deployment process; make monitoring, scaling and deployment more manageable without our lines of homebaked scripts; generally make our monitoring, scaling and deployment more able to meet changing requirements. kubernetes looks good for removing the first set of questions and sufficing the wants. next steps: create a coreos cluster; install kubernetes on the cluster; deploy an application via kubernetes; assess if coreos and kubernetes take us in a direction we want to go.

monitoring client side performance and javascript errors posted on september 21, 2016

the rise of single page apps (ie angularjs) presents some interesting problems for ops. specifically, the increased dependence on browser-executed code means that real user experience monitoring is a must. to that end i have reviewed some javascript agent monitoring solutions: sentry.io; new relic browser; appdynamics browser rum; dynatrace uem – didn't end up testing the saas offering; google analytics with custom event push: http://www.davidverhasselt.com/an-easy-javascript-error-logger-using-ga/. the solution/s must meet the following requirements. must have: detailed javascript error reporting; negligible performance impact; real user performance monitoring; effective single page app (angularjs) support; real time alerting. nice to have: low cost; easy to deploy and maintain; easy integration with tools we use for notifications (icinga2, slack). as our application is a single page angular app, new relic browser requires that we pay us$130 for any single page app capability. the javascript error detection was not very impressive, as uncaught exceptions outside of the angular app were not reported without angular integration. google analytics with custom event push does not have any real time alerting, which disqualifies it as an ops solution.
appdynamics browser was easy to integrate; getting javascript error details in the console was straightforward, but getting those errors to communication tools like slack was surprisingly difficult. alerts are based on health checks, which are breaches of metric thresholds – so i can send an alert saying there were more than 0 javascript errors in the last minute, but with no details about the error and no direct link to it. sentry.io: simple to add monitoring, simple to get alerting with click-through to all the javascript error info; no performance monitoring. conclusion: sticking to the unix philosophy, using sentry.io for javascript error alerting and appdynamics browser lite for performance alerting. both have free levels to get started (ongoing, not just a 30 day trial).

getting started with gatling – part 2 posted on june 5, 2016

with the basics of simulations, scenarios, virtual users, sessions, feeders, checks, assertions and reports down, it's time to think about what to load test and how. will start with a test that tries to mimic the end user experience. that means that all the 3rd party javascript, css, images etc should be loaded. it does not seem reasonable to say our loadtest performance was great when none of our users will get a responsive app because of all those things we depend on (though, yes, most of it will likely already be cached by the user). this increases the complexity of the simulation scripts, as there will be lots of additional resource requests cluttering things up. it is very important for maintainability to avoid code duplication and use the singleton object functionality available.

using the recorder: as i want to include cdn calls, i tried the recorder's 'generate ca' functionality. this is supposed to generate certs on the fly for each cn. this would be convenient, as i could just trust a locally generated ca and not have to track down and trust all sources. unfortunately i could not get the recorder to generate its own ca, and when using a local ca generated with openssl i could not feed the ca password to the recorder. i only spent 15 mins trying this before reverting to the default self-signed cert. reviewing firefox's network panel (firefox menu -> developer -> network) shows any blocked sources, which can then be visited directly and trusted with our fake cert (there are some fairly serious security implications of doing this; i personally only use my testing browser (firefox) with these types of proxy tools and never for normal browsing). the recorder is very handy for getting the raw code you need into the test script; it is not a complete test though. next up: dealing with authentication headers – the recorded simulation does not set the header based on the response from login attempts; requests dependent on the previous response – the recorder does not capture this dependency, it only sees the raw outbound requests, so there will need to be consideration of parsing results; validating responses.

dealing with authentication headers: the check api is used for verifying that the response to a request matches expectations, and for capturing some elements of it. after half an hour or so of playing around with the check api, it is behaving as i want thanks to good, concise doc:
scala .exec(http("login-with-creds") .post("/cm/login") .headers(headers_14) .body(rawfilebody("test_user_creds.txt")) .check(headerregex("set-cookie", "access_token=(.*);version=*").saveas("auth_token")) 12345 .exec(http("login-with-creds") .post("/cm/login") .headers(headers_14) .body(rawfilebody("test_user_creds.txt")) .check(headerregex("set-cookie", "access_token=(.*);version=*").saveas("auth_token")) the “.check” is looking for the header name “set-cookie” then extracting the auth token using a regex and finally saving the token as a key called auth_token. in subsequent requests i need to include a header containing this value, and some other headers. so instead of listing them out each time a function makes things much neater: scala def authheader (auth_token:string):map[string, string] = { map("authorization" -> "bearer ".concat(auth_token), "origin" -> baseurl) } //... http("list_irs") .get(uri1 + "/information-requests") .headers(authheader("${auth_token}")) // providing the saved key value as a string arg 12345678 def authheader (auth_token:string):map[string, string] = { map("authorization" -> "bearer ".concat(auth_token), "origin" -> baseurl)} //...http("list_irs") .get(uri1 + "/information-requests") .headers(authheader("${auth_token}")) // providing the saved key value as a string arg its also worth noting that to ensure that all this was working as expected i modified /conf/logback.xml to output all http request response data to stdout. xhtml 12345678 requests dependent on the previous response with many modern applications, the behaviour of the gui is dictated by responses from an api. for example, when a user logs in, the gui requests a json file with all (max 50) of the users open requests. when the gui received this, the requests are rendered. in many cases this rendering process involves many more http requests that depending on the time and state of the users which may vary significantly. so… if we are trying to imitate end user experience instead of requesting the render info for the same open requests all of the time, we should parse the json response and adjust subsequent requests accordingly. thankfully gatling allows for the use of jsonpath. i got stuck trying to get all of the id vals out of a json return and then create requests for each of them. i had incorrectly assumed that the el gatling provided ‘random’ function could be called on a vector. this meant i thought the vector was ‘undefined’ as per the error message. the vector was in fact as expected which was clear by printing it. //grabs all id values from the response body and puts them in a vector accessible via "${answer_ids}" or sessions.get("answer_ids") http("list_irs") .get(uri1 + "/information-requests") .headers(authheader("${auth_token}")).check(status.is(200), jsonpath("$..id").findall.saveas("answer_ids")) //.... 
//prints all vaules in the answer_ids vector .exec(session => { val maybeid = session.get("answer_ids").asoption[string] println(maybeid.getorelse("no ids found")) session }) 1234567891011 //grabs all id values from the response body and puts them in a vector accessible via "${answer_ids}" or sessions.get("answer_ids")http("list_irs").get(uri1 + "/information-requests").headers(authheader("${auth_token}")).check(status.is(200), jsonpath("$..id").findall.saveas("answer_ids")) //....//prints all vaules in the answer_ids vector.exec(session => { val maybeid = session.get("answer_ids").asoption[string] println(maybeid.getorelse("no ids found")) session}) to run queries with all of the values pulled out of the json response we can use the foreach component. again got stuck for a little while here. was putting the foreach competent within an exec function, where (as below) it should be outside of an exec and reference a chain the contains an exec. val answer_chain = exec(http("an_answer") .get(uri1 + "/information-requests/${item}/stores/answers") .headers(authheader("${auth_token}")).check(status.is(200))) //... val scn = scenario("basiclogin") /... .exec(http("list_irs") .get(uri1 + "/information-requests") .headers(authheader("${auth_token}")).check(status.is(200), jsonpath("$..id").findall.saveas("answer_ids"))), .foreach("${answer_ids}","item") { answer_chain } 12345678910 val answer_chain = exec(http("an_answer") .get(uri1 + "/information-requests/${item}/stores/answers") .headers(authheader("${auth_token}")).check(status.is(200)))//...val scn = scenario("basiclogin")/....exec(http("list_irs") .get(uri1 + "/information-requests") .headers(authheader("${auth_token}")).check(status.is(200), jsonpath("$..id").findall.saveas("answer_ids"))),.foreach("${answer_ids}","item") { answer_chain } validating responses what do we care about in responses? http response headers (generally expecting 200 ok) http response body contents – we can define expectations based on understanding of app behaviour response time – we may want to define responses taking more than 2000ms as failures (queue application performance sales pitch) checking response headers is quite simple and can be seen explicitly above in .check(status.is(200). in fact, there is no need for 200 checks to be explicit as “a status check is automatically added to a request when you don’t specify one. it checks that the http response has a 2xx or 304 status code.” — checks. http response body content checks are valuable for ensuring the app behaves as expected. they also require a lot of maintenance so it is important to implement tests using code reuse where possible. gatling is great for this as we can use the scala and all the power that comes with it (ie: reusable objects and functions across all tests). next up is response time checks. note that these response times are specific to the http layer and do not infer a good end user experience. javascript and other rendering, along with blocking requests mean that performance testing at the http layer is incomplete performance testing (though it is the meat and potatoes). gatling provides the assertions api to conduct checks globally (on all requests). there are numerous scopes, statistics and conditions to choose from there. for specific operations, responsetimeinmillis and latencyinmillis are provided by gatling – responsetimeinmillis includes the time is takes to fully send the request and fully receive the response (from the test host). 
as a default i use responseTimeInMillis, as it has slightly higher coverage as a test. these three verifications/tests can be seen here:

package mwc_gatling
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._

class BasicLogin extends Simulation {
  val baseUrl = "https://blah.mwclearning.com"
  val httpProtocol = http
    .baseURL(baseUrl)
    .acceptHeader("application/json, text/plain, */*")
    .acceptEncodingHeader("gzip, deflate")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:43.0) Gecko/20100101 Firefox/43.0")

  def authHeader(auth_token: String): Map[String, String] = {
    Map("Authorization" -> "Bearer ".concat(auth_token), "Origin" -> baseUrl)
  }

  val answer_chain = exec(http("an_answer")
    .get(uri1 + "/information-requests/${item}/stores/answers")
    .headers(authHeader("${auth_token}")).check(status.is(200), jsonPath("$..status")))

  val scn = scenario("BasicLogin")
    .exec(http("get_web_app_deps")
    //... bunch of get requests for js css etc
    .exec(http("login-with-creds")
      .post("/cm/login")
      .body(RawFileBody("test_user_creds.txt"))
      .check(headerRegex("Set-Cookie", "access_token=(.*);Version=*").saveAs("auth_token"))
    //... another bunch of gets for post-auth deps
    http("list_irs")
      .get(uri1 + "/information-requests")
      .headers(authHeader("${auth_token}")).check(status.is(200), jsonPath("$..id").findAll.saveAs("answer_ids"))
    //... now that we have a vector full of ids we can request those resources
    .foreach("${answer_ids}", "item") { answer_chain }

  //... finally set the simulation params and assertions
  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol).assertions(
    global.responseTime.max.lessThan(2000),
    global.successfulRequests.percent.greaterThan(99))
}

that's about all i need to get started with gatling! the next steps are: extending coverage (more tests!); putting processes in place to notify and act on identified issues; refining tests to provide more information about the likely problem domain; making a modular and maintainable test library that can be updated in one place to deal with changes to the app; aggregating results for trending and correlation with changes; spin up and spin down environments specifically for load testing; jenkins integration.

getting started with gatling – part 1 posted on may 30, 2016

with the need to do some more effective load testing i am getting started with gatling. why gatling and not jmeter? i have not used either, so i don't have a valid opinion. i made my choice based on: reduced gui dependency; newer code base; documentation conciseness; personal preference for scala vs straight java; some blog posts including: https://blog.flood.io/stress-testing-jmeter-and-gatling/, https://octoperf.com/blog/2015/06/08/jmeter-vs-gatling/ and http://badmoodperf.blogspot.com.au/2014/01/gatling-vs-jmeter-fact-checking.html.

working through the gatling quickstart: next step is working through the basic doc: http://gatling.io/docs/2.2.1/quickstart.html#quickstart. pretty simple and straightforward. moving on to the more advanced tutorial: http://gatling.io/docs/2.2.1/advanced_tutorial.html#advanced-tutorial. this included: creating objects for process isolation; virtual users; dynamic data with feeders and checks; first usage of gatling's expression language (not rly a language o_o). the most interesting function:

object Search {
  val feeder = csv("search.csv").random
  val search = exec(http("home")
    .get("/"))
    .pause(1)
    .feed(feeder)
    .exec(http("search")
      .get("/computers?f=${searchCriterion}")
      .check(css("a:contains('${searchComputerName}')", "href").saveAs("computerURL")))
    .pause(2)
    .exec(http("select")
      .get("${computerURL}"))
    .pause(3)
}

…simulations are plain scala classes, so we can use all the power of the language if needed. next, covered off the key concepts in gatling: virtual user -> logical grouping of behaviours, ie: administrator (login, update user, add user, logout); scenario -> defines virtual users' behaviours, ie: (login, update user, add user, logout); simulation -> a description of the load test (group of scenarios, users – how many and what rampup); session -> each virtual user is backed by a session, which can allow for sharing of data between operations (see above); feeders -> method for getting input data for tests, ie: login values, search and response values; checks -> can verify http response codes and capture elements of the response body; assertions -> define acceptance criteria (slower than x means failure); reports -> aggregated output. last review for today was of a presentation by stephane landelle and romain sertelon, the authors of gatling. next step is to implement some tests and figure out a good way to separate simulations/scenarios and reports.
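for completeness, a hedged sketch of launching a single simulation non-interactively from the gatling 2.2 bundle (the class name matches the BasicLogin example above; -on and -rd flags per the 2.2 docs):

# run one simulation, skipping the interactive simulation menu
$GATLING_HOME/bin/gatling.sh -s mwc_gatling.BasicLogin -on basiclogin -rd "baseline run"
# html reports land under $GATLING_HOME/results/<output-name>-<timestamp>/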
transitioning from standard ca to letsencrypt! posted on may 29, 2016

with the go-live of https://letsencrypt.org/ it's time to transition from the pricey and manual standard ssl cert issuing model to a fully automated process using the acme protocol. most orgs have numerous usages of ca-purchased certs; this post will cover hosts running apache/nginx and aws elbs. all of these usages are to be replaced with automated provisioning and renewal of letsencrypt-signed certs.

provisioning and auto-renewing apache and nginx tls/ssl certs: for externally accessible sites where apache/nginx handles tls/ssl termination, moving to letsencrypt is quick and simple.

1 – install the letsencrypt client software (there are rhel and centos rpms, so that's as simple as adding the package to puppet policies, or:

yum install letsencrypt

2 – provision the keys and certificates for each of the required virtual hosts. if a virtual host has aliases, specify multiple names with the -d arg:

letsencrypt certonly --webroot -w /var/www/sites/static -d static.mwclearning.com -d img.mwclearning.com

this will provision a key and certificate + chain to the letsencrypt home directory (default /etc/letsencrypt). the /etc/letsencrypt/live directory contains symlinks to the current keys and certs.

3 – update the apache/nginx virtualhost configs to use the symlinks maintained by the letsencrypt client, ie:

# static web site
<VirtualHost *:80>
  ServerName static.mwclearning.com
  ServerAlias img.mwclearning.com
  ServerAlias registry.mwclearning.ninja # <<-- dummy alias for internal site
  ServerAdmin webmaster@mwclearning.ninja
  DocumentRoot /var/www/sites/static
  DirectoryIndex index.php index.html
  AllowOverride All
  Options +Indexes
  ErrorLog /var/log/httpd/static_error.log
  LogLevel warn
  CustomLog /var/log/httpd/static_access.log combined
</VirtualHost>

<VirtualHost *:443>
  ServerName static.mwclearning.com
  ServerAlias img.mwclearning.com
  ServerAlias img.mwclearning.ninja
  ServerAdmin webmaster@mwclearning.com
  DocumentRoot /var/www/sites/static
  DirectoryIndex index.php index.html
  AllowOverride All
  Options +Indexes
  ErrorLog /var/log/httpd/static_ssl_error.log
  LogLevel warn
  CustomLog /var/log/httpd/static_ssl_access.log combined
  SSLEngine on
  SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
  SSLHonorCipherOrder on
  SSLInsecureRenegotiation off
  SSLCertificateKeyFile /etc/letsencrypt/live/static.mwclearning.com/privkey.pem
  SSLCertificateFile /etc/letsencrypt/live/static.mwclearning.com/cert.pem
  SSLCertificateChainFile /etc/letsencrypt/live/static.mwclearning.com/chain.pem
</VirtualHost>

4 – create a script for renewing these certs, something like:

#!/bin/bash
# vars
prog_echo=$(which echo)
prog_letsencrypt=$(which letsencrypt)
prog_find=$(which find)
prog_openssl=$(which openssl)
#
# main
#
${prog_echo} "current expiries: "
for x in $(${prog_find} /etc/letsencrypt/live/ -name cert.pem); do
  ${prog_echo} "$x: $(${prog_openssl} x509 -noout -enddate -in $x)"
done
${prog_echo} "running letsencrypt certonly --webroot .. on $(hostname)"
${prog_letsencrypt} renew --agree-tos
le_status=$?
systemctl restart httpd
if [ "$le_status" != 0 ]; then
  ${prog_echo} "automated renewal failed:"
  cat /var/log/letsencrypt/renew.log
  exit 1
else
  ${prog_echo} "new expiries: "
  for x in $(${prog_find} /etc/letsencrypt/live/ -name cert.pem); do
    echo "$x: $(${prog_openssl} x509 -noout -enddate -in $x)"
  done
fi
# eof

5 – run this script automatically every day with cron or jenkins.

6 – monitor the results of the script and externally monitor the expiry dates of your certificates (something will go wrong one day).
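a hedged cron example for step 5 (the script path and schedule are assumptions):

# /etc/cron.d/letsencrypt-renew – run the renewal script daily at 03:30, logging via syslog
30 3 * * * root /usr/local/bin/letsencrypt_renew.sh 2>&1 | logger -t letsencrypt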
provisioning and auto-renewing aws elastic load balancer tls/ssl certs

This has been made very easy by Alex Gaynor with a handy Python script: https://github.com/alex/letsencrypt-aws. It is a great use-case for Docker, and Alex has created a Docker image for the script: https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. To use this with ease I created a layer on top with a new Dockerfile:

shell
    #
    # mwc letsencrypt-aws image
    #
    FROM alexgaynor/letsencrypt-aws:latest
    MAINTAINER mark

    ENV LETSENCRYPT_AWS_CONFIG="{\"domains\": \
        [{\"elb\":{\"name\":\"testextlb\",\"port\":\"443\"}, \
          \"hosts\":[\"test.mwc.com\",\"test-app.mwc.com\",\"test-api.mwc.com\"], \
          \"key_type\":\"rsa\"}, \
         {\"elb\":{\"name\":\"prodextlb\",\"port\":\"443\"}, \
          \"hosts\":[\"app.mwc.com\",\"show.mwc.com\",\"show-app.mwc.com\", \
                     \"app-api.mwc.com\",\"show-api.mwc.com\"], \
          \"key_type\":\"rsa\"}], \
        \"acme_account_key\":\"s3://config-bucket-abc123/config_items/private_key.pem\"}"

    ENV AWS_ACCESS_KEY_ID=""
    ENV AWS_SECRET_ACCESS_KEY=""
    ENV AWS_DEFAULT_REGION="ap-southeast-2"
    # eof

The explanation of these values can be found at https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. It is quite important to create a specific IAM user to conduct the required Route53, S3 and ELB actions. The image needs to be rebuilt on any changes:

shell
    sudo docker build -t registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws .
    sudo docker push registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws

With this image built, another cron or Jenkins job can be run daily, executing something like:

shell
    sudo docker pull registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
    sudo docker run registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
    sleep 10
    sudo docker rm $(sudo docker ps -a | grep registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws | awk '{print $1}')

Again, the job must be monitored, along with external monitoring of the certificates. See a complete SSL checker at https://github.com/markz0r/tools/tree/master/ssl_check_complete.
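For a quick manual spot-check of a single endpoint, something along these lines works (the hostname is one of the examples above; proper monitoring should alert well before expiry rather than rely on ad-hoc checks):

shell
    # fetch the certificate actually served on the elb and print its validity window
    echo | openssl s_client -connect test.mwc.com:443 -servername test.mwc.com 2>/dev/null \
      | openssl x509 -noout -dates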
download all evernote attachments via evernote api with python posted on april 24, 2016

Python script for downloading snapshots of all attachments in all of your Evernote notebooks.

Source: https://github.com/markz0r/tools/blob/master/backup_scripts/evernote_backup.py

evernote_backup.py
python
    #!/usr/bin/python
    import json, os, pickle, httplib2, io
    import evernote.edam.userstore.constants as UserStoreConstants
    import evernote.edam.type.ttypes as Types
    from evernote.api.client import EvernoteClient
    from evernote.edam.notestore.ttypes import NoteFilter, NotesMetadataResultSpec
    from datetime import date

    # pre-reqs: pip install evernote
    # api key from https://dev.evernote.com/#apikey
    os.environ["PYTHONPATH"] = "/Library/Python/2.7/site-packages"
    credentials_file = ".evernote_creds.json"
    local_token = ".evernote_token.pkl"
    output_dir = str(date.today()) + "_evernote_backup"

    def prepdest():
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)
        return True

    # helper function to turn query string parameters into a dict
    # source: https://gist.github.com/inkedmn
    def parse_query_string(authorize_url):
        uargs = authorize_url.split('?')
        vals = {}
        if len(uargs) == 1:
            raise Exception('invalid authorization url')
        for pair in uargs[1].split('&'):
            key, value = pair.split('=', 1)
            vals[key] = value
        return vals

    class AuthToken(object):
        def __init__(self, token_list):
            self.oauth_token_list = token_list

    def authenticate():
        def storetoken(auth_token):
            with open(local_token, 'wb') as output:
                pickle.dump(auth_token, output, pickle.HIGHEST_PROTOCOL)

        def oauthflow():
            with open(credentials_file) as data_file:
                data = json.load(data_file)
            client = EvernoteClient(
                consumer_key=data.get('consumer_key'),
                consumer_secret=data.get('consumer_secret'),
                sandbox=False
            )
            request_token = client.get_request_token('https://assetowl.com')
            print(request_token)
            print("token expired, load in browser: " + client.get_authorize_url(request_token))
            print("paste the url after login here:")
            authurl = raw_input()
            vals = parse_query_string(authurl)
            auth_token = client.get_access_token(request_token['oauth_token'],
                                                 request_token['oauth_token_secret'],
                                                 vals['oauth_verifier'])
            storetoken(AuthToken(auth_token))
            return auth_token

        def gettoken():
            store_token = ""
            if os.path.isfile(local_token):
                with open(local_token, 'rb') as input:
                    clientt = pickle.load(input)
                store_token = clientt.oauth_token_list
            return store_token

        try:
            client = EvernoteClient(token=gettoken(), sandbox=False)
            userstore = client.get_user_store()
            user = userstore.getUser()
        except Exception as e:
            print(e)
            client = EvernoteClient(token=oauthflow(), sandbox=False)
        return client

    def listnotes(client):
        note_list = []
        note_store = client.get_note_store()
        filter = NoteFilter()
        filter.ascending = False
        spec = NotesMetadataResultSpec(includeTitle=True)
        spec.includeTitle = True
        notes = note_store.findNotesMetadata(client.token, filter, 0, 100, spec)
        for note in notes.notes:
            for resource in note_store.getNote(client.token, note.guid, False, False, True, False).resources:
                note_list.append([resource.attributes.fileName, resource.guid])
        return note_list

    def downloadresources(web_prefix, res_array, auth_token):
        for res in res_array:
            res_url = "%sres/%s" % (web_prefix, res[1])
            print("downloading: " + res_url + " to " + output_dir + "/" + res[0])
            h = httplib2.Http(".cache")
            # the resource download is authenticated by passing the oauth token
            (resp_headers, content) = h.request(res_url, "POST", headers={'auth': auth_token})
            with open(os.path.join(output_dir, res[0]), "wb") as wer:
                wer.write(content)

    def main():
        if prepdest():
            client = authenticate()
            user_store = client.get_user_store()
            web_prefix = user_store.getPublicUserInfo(user_store.getUser().username).webApiUrlPrefix
            downloadresources(web_prefix, listnotes(client), client.token)

    if __name__ == '__main__':
        main()
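To run it, drop a .evernote_creds.json containing your consumer_key and consumer_secret next to the script, then something like the following (a minimal sketch: evernote is the pre-req noted in the script, and httplib2 is assumed to be installed alongside it if not already pulled in). The first run walks through the OAuth flow in a browser and caches the token in .evernote_token.pkl:

shell
    # python 2.7
    pip install evernote httplib2
    python evernote_backup.py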

