In this article, I will walk you through a better way to manage TLS certificates in Kubernetes: combining the Ambassador Edge Stack’s automatic TLS with cert-manager.
This tutorial assumes you have a Kubernetes cluster with the Ambassador Edge Stack installed, kubectl access to it, and a domain name you control.
The Ambassador Edge Stack has built-in support for automatically using ACME to create and renew TLS certificates, configured via the Host resource. However, it only supports ACME’s http-01 challenge; if you require more flexible certificate management (such as ACME’s dns-01 challenge, or a non-ACME certificate source), the Ambassador Edge Stack also supports external certificate management tools.
One such tool is Jetstack’s cert-manager, which is a general-purpose tool for managing certificates in Kubernetes. Cert-manager will automatically create and renew TLS certificates and store them as Kubernetes secrets for easy use in a cluster. The Ambassador Edge Stack will automatically watch for secret changes and reload certificates upon renewal.
Essentially, we can deploy cert-manager to manage certificates in Kubernetes for us. Ambassador only supports the HTTP-01 challenge on its own, but it is possible to perform the DNS-01 challenge using cert-manager.
Note: We use GoDaddy domain names, and GoDaddy is not a supported DNS provider (see the list of supported providers). There are several cert-manager GoDaddy webhook implementations online, but they don’t seem to be well maintained, so I decided to stick with the HTTP-01 challenge.
For this tutorial, I will be using an arbitrary email my-email@gmail.com and Let’s Encrypt to issue a certificate for an arbitrary domain name dev.mydomain.com.
Let’s start by installing the Cert-Manager tool that will manage our certificates.
# Install Custom Resource Definitions and Cert-Manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
Note: You can also install Cert-Manager with Helm (see here)
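Before moving on, you can confirm that the cert-manager pods are up (by default they are deployed into the cert-manager namespace):

kubectl get pods -n cert-manager
# The cert-manager, cainjector and webhook pods should all reach the Running state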
An Issuer or ClusterIssuer identifies which Certificate Authority cert-manager will use to issue a certificate. An Issuer is a namespaced resource, allowing you to use a different CA in each namespace, while a ClusterIssuer can issue certificates in any namespace. The configuration depends on which ACME challenge you are using.
Once the Cert-manager deployments are completed, you can create a ClusterIssuer (global) or an Issuer (namespaced) resource. In this case, we are using Let’s Encrypt.
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: my-email@gmail.com
    # ACME server URL; you can use the staging environment URL to issue untrusted certificates
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: issuer-account-private-key
    solvers:
    # Define the solver to perform the HTTP-01 challenge
    - http01:
        ingress:
          class: nginx
      selector: {}
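Save the manifest (as, say, cluster-issuer.yaml — the filename is arbitrary), apply it, and check that the issuer becomes ready:

kubectl apply -f cluster-issuer.yaml
kubectl get clusterissuer letsencrypt-prod
# The READY column should eventually show True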
A Certificate is a namespaced resource that specifies fields used to generate certificate signing requests, which are then fulfilled by the issuer type you have referenced. Certificates specify which issuer they want to obtain the certificate from via the certificate.spec.issuerRef field.
Once the Issuer is ready, you can create a Certificate resource which will send a request to issue a new certificate.
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: dev.mydomain.com
  # Cert-manager will put the resulting Secret in the same Kubernetes
  # namespace as the Certificate. You should create the certificate in
  # whichever namespace you want to configure a Host.
spec:
  secretName: dev.mydomain.com
  issuerRef:
    # Name of the ClusterIssuer
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - dev.mydomain.com
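Apply the template (assuming it is saved as certificate.yaml — the filename is an assumption):

kubectl apply -f certificate.yaml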
After applying this template, you should see the following events:
$ kubectl get events -n default # The namespace in which you created your Certificate resource
14m Normal cert-manager.io certificaterequest/dev.mydomain.com-qrfxs Certificate request has been approved by cert-manager.io
14m Normal Issuing certificate/dev.mydomain.com Issuing certificate as Secret does not exist
14m Normal Generated certificate/dev.mydomain.com Stored new private key in temporary Secret resource "dev.mydomain.com-lrdk6"
14m Normal Requested certificate/dev.mydomain.com Created new CertificateRequest resource "dev.mydomain.com-qrfxs"
14m Normal Created order/dev.mydomain.com-qrfxs-820390478 Created Challenge resource "dev.mydomain.com-qrfxs-820390478-3681158932" for domain "dev.mydomain.com"
<unknown> Normal Scheduled pod/cm-acme-http-solver-fbhcs Successfully assigned default/cm-acme-http-solver-fbhcs to the-name-of-some-node-1
14m Normal Presented challenge/dev.mydomain.com-qrfxs-820390478-3681158932 Presented challenge using HTTP-01 challenge mechanism
14m Normal Started challenge/dev.mydomain.com-qrfxs-820390478-3681158932 Challenge scheduled for processing
14m Normal Pulling pod/cm-acme-http-solver-fbhcs Pulling image "quay.io/jetstack/cert-manager-acmesolver:v1.3.1"
13m Normal Pulled pod/cm-acme-http-solver-fbhcs Successfully pulled image "quay.io/jetstack/cert-manager-acmesolver:v1.3.1"
13m Normal Started pod/cm-acme-http-solver-fbhcs Started container acmesolver
13m Normal Created pod/cm-acme-http-solver-fbhcs Created container acmesolver
At this point, cert-manager will have created a temporary pod named cm-acme-http-solver-xxxx, but no certificate has been issued yet. You will need to create a Mapping resource to allow Ambassador to route the HTTP-01 challenge request to the solver via http://dev.mydomain.com/.well-known/acme-challenge/<some-token>.
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
spec:
  prefix: /.well-known/acme-challenge/
  rewrite: ""
  service: acme-challenge-service
---
apiVersion: v1
kind: Service
metadata:
  name: acme-challenge-service
spec:
  ports:
  - port: 80
    targetPort: 8089
  selector:
    acme.cert-manager.io/http01-solver: "true"
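Apply these resources (assuming they are saved as acme-challenge.yaml), and, if you like, keep an eye on the challenge while cert-manager retries:

kubectl apply -f acme-challenge.yaml
kubectl get challenges -n default --watch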
After applying the template, you will need to wait several minutes (about 10) before cert-manager retries the challenge and issues a certificate. You should see the following events:
$ kubectl get events -n default # The namespace in which you created your Certificate resource
6m38s Normal Killing pod/cm-acme-http-solver-fbhcs Stopping container acmesolver
6m38s Normal DomainVerified challenge/dev.mydomain.com-qrfxs-820390478-3681158932 Domain "dev.mydomain.com" verified with "HTTP-01" validation
6m37s Normal Complete order/dev.mydomain.com-qrfxs-820390478 Order completed successfully
6m37s Normal Issuing certificate/dev.mydomain.com The certificate has been successfully issued
6m37s Normal CertificateIssued certificaterequest/dev.mydomain.com-qrfxs Certificate fetched from issuer successfully
After the certificate is successfully issued, there should be a TLS secret called dev.mydomain.com (the name is defined by secretName in the Certificate resource). Then, you can create a Host resource. It will register your ACME account, read the certificate from the TLS secret, and use it to terminate TLS on your domain.
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: dev.mydomain.com
  namespace: default
spec:
  acmeProvider:
    authority: 'https://acme-v02.api.letsencrypt.org/directory'
    email: my-email@gmail.com
  ambassadorId:
  - default
  hostname: dev.mydomain.com
  selector:
    matchLabels:
      hostname: dev.mydomain.com
  tlsSecret:
    name: dev.mydomain.com # The secretName defined in your Certificate resource
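Apply the Host manifest (assuming it is saved as host.yaml):

kubectl apply -f host.yaml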
You should see the following events:
$ kubectl get events -n default # The namespace in which you created your Host resource
10s Normal Pending host/dev.mydomain.com waiting for Host DefaultsFilled change to be reflected in snapshot
8s Normal Pending host/dev.mydomain.com creating private key Secret
8s Normal Pending host/dev.mydomain.com waiting for private key Secret creation to be reflected in snapshot
6s Normal Pending host/dev.mydomain.com waiting for Host status change to be reflected in snapshot
4s Normal Pending host/dev.mydomain.com registering ACME account
3s Normal Pending host/dev.mydomain.com waiting for Host ACME account registration change to be reflected in snapshot
3s Normal Pending host/dev.mydomain.com ACME account registered
1s Normal Pending host/dev.mydomain.com waiting for TLS Secret update to be reflected in snapshot
1s Normal Pending host/dev.mydomain.com updating TLS Secret
0s Normal Ready host/dev.mydomain.com Host with ACME-provisioned TLS certificate marked Ready
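To verify TLS termination end to end, you can hit your domain over HTTPS (assuming your DNS record points at the Ambassador load balancer):

curl -v https://dev.mydomain.com/
# Inspect the verbose output: the TLS handshake should succeed and the
# certificate issuer should be Let's Encrypt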
The Ambassador Edge Stack automatically enables TLS termination/HTTPS, and you can easily configure it to completely manage TLS by requesting a certificate from a Certificate Authority (CA) instead of generating and managing certificates yourself!
🐢
I remember that during my system administrator/DevOps internship, I had to perform some operations on background services. At that point, I still had no idea what services were. I had to use commands like:
sudo service <service-name> <command>
Occasionally, I would see people online use init.d instead, which also worked.
sudo /etc/init.d/<service-name> <command>
But why are there two commands that do exactly the same thing? Sadly, this question never crossed my mind. I was happy as long as the commands worked. That is, until I started working on Fedora CoreOS for Kubernetes and this happened:
$ sudo service kubelet <command>
sudo: service: command not found
service is not a command?! After looking for an answer on Google, I found that the command was specific to certain Linux distributions and that the solution was to use:
sudo systemctl <command> <service-name>
What!? A third command to manage services? Yup. In fact, some Linux distributions (distros) have their own commands to manage services, but I’m not going to go into that. In this article, I will only talk about the init daemons Init and Systemd, which use the commands service and systemctl respectively. But first, we need to understand what an init daemon is.
The init daemon is the first process executed by the Linux Kernel and its process ID (PID) is always 1. Its purpose is to initialize, manage and track system services and daemons. In other words, the init daemon is the parent of all processes on the system.
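You can check which init daemon your own system runs by looking at PID 1:

ps -p 1 -o comm=
# Prints "systemd" on systemd-based distros, "init" on SysVinit-based ones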
Init (also known as System V init, or SysVinit) is an init daemon, created in the 1980s, that defines six run-levels (system states) and maps all system services to these run-levels. This allows all services (defined as scripts) to be started in a pre-defined sequence. The next script in the sequence is executed only once the current one has finished, or timed out if it gets stuck. These unexpected waits, and starting services serially in general, make the system initialization process inefficient and relatively slow.
To create a service, you will need to write a script and store it in the /etc/init.d directory. You would write a service script /etc/init.d/myService that looks something like this:
#!/bin/bash
# chkconfig: 2345 20 80
# description: Description comes here....

# Source function library.
. /etc/init.d/functions

start() {
    # TODO: code to start app comes here
    :
}

stop() {
    # TODO: code to stop app comes here
    :
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        # TODO: code to check status of app comes here
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
esac
exit 0
You can read about chkconfig in its man page; essentially, it defines the run-levels in which your service should run. Once you have your script, you can use the service command to start, stop, and restart your service.
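For example, assuming the script above is saved as /etc/init.d/myService and made executable (chkconfig is available on Red Hat-style distros):

sudo chmod +x /etc/init.d/myService
sudo chkconfig --add myService    # register the service with its run-levels
sudo service myService start
sudo service myService status
sudo service myService stop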
Systemd (system daemon) is an init daemon used by modern systems; it starts system services in parallel, which removes unnecessary delays and speeds up the initialization process. What do I mean by parallel? Systemd uses unit dependencies to define whether a service wants/requires other services to run successfully, and unit ordering to define whether a service needs other services to be started before/after it.
To create a service, you will need to write a .service file and store it in the /etc/systemd/system directory. You would write a file /etc/systemd/system/myService.service that looks something like this:
[Unit]
Description=Some Description
Requires=syslog.target
After=syslog.target

[Service]
ExecStart=/usr/sbin/<command-to-start>
ExecStop=/usr/sbin/<command-to-stop>

[Install]
WantedBy=multi-user.target
I will discuss how to create a service with Systemd in more detail in another article. Once you have your service file, you can start, stop, and restart your service using the systemctl command.
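A minimal sketch, using the hypothetical myService unit above:

sudo systemctl daemon-reload       # make systemd pick up the new unit file
sudo systemctl start myService
sudo systemctl status myService
sudo systemctl enable myService    # start the service automatically at boot
sudo systemctl stop myService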
Init and Systemd are both init daemons, but it is better to use the latter since it is commonly used in recent Linux distros. Init uses service whereas Systemd uses systemctl to manage Linux services.
🐢
API stands for Application Programming Interface. APIs allow applications to talk with each other via a common communication method. There are a lot of different API architectural styles such as REST, SOAP, GraphQL and gRPC. With most APIs, there’s a request followed by a response.
For example, a restaurant might have an application that makes an API request to their server to obtain a list of menu items in the response, then displays it for their users. A lot of applications out there provide public APIs that you can use in your personal projects, such as the YouTube Data API and the Google Maps API.
Unlike APIs, a webhook is simply an HTTP POST request that is triggered automatically when an event occurs. Basically, webhooks are “user-defined callbacks”.
For example, an application could provide a webhook that gets triggered by another application when new data is received (callback), instead of sending requests at fixed intervals to fetch new data (polling).
Slack provides a complete list of REST API methods available to bots. We are going to use the users.list method to list available users and chat.postMessage method to send a message to a user or channel.
1. Navigate to the Custom Integrations page of your Workspace at https://<your-workspace-name>.slack.com/apps/manage/custom-integrations and select Bots.
2. Choose a name and add the bot integration.
3. Save the API token; we will use it later in Slack API requests for authentication.
4. Let’s try out the users.list method using an API client like Postman, then click on Code to generate the code:
# slack-api.py
import requests, json

base_url = "https://slack.com/api"

payload = {}
headers = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Authorization': 'Bearer [your Slack Bot API Token]'
}

# Make GET request and receive response
response = requests.request("GET", f"{base_url}/users.list", headers=headers, data=payload)

# Convert response to a dict object
response_json = json.loads(response.text)

# Find user by username
username = 'yueh.liu'
user = next((member for member in response_json['members'] if member['name'] == username), None)

# Make sure the user exists
if not user:
    raise Exception(f'User [{username}] was not found')

# Save the user_id
user_id = user['id']
5. Now that we have the user ID, we can try sending a message to that user! We can repeat the previous step with the chat.postMessage method. Make sure to change the request method to POST.
You should receive a message like this on Slack
The updated code should look something like this:
# slack-api.py
import requests, json

base_url = "https://slack.com/api"

payload = {}
headers = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Authorization': 'Bearer [your Slack Bot API Token]'
}

# Make GET request and receive response
response = requests.request("GET", f"{base_url}/users.list", headers=headers, data=payload)

# Convert response to a dict object
response_json = json.loads(response.text)

# Find user by username
username = 'yueh.liu'
user = next((member for member in response_json['members'] if member['name'] == username), None)

# Make sure the user exists
if not user:
    raise Exception(f'User [{username}] was not found')

# Save the user_id
user_id = user['id']

# Set the parameters such as the channel ID (user ID in our case), username for the bot,
# text message, icon url, etc.
# You can also send a JSON payload instead of query parameters, but you would need to
# change the 'Content-Type' to 'application/json' in the headers
params = f"channel={user_id}&text=Hello Yueh!&username=ua-bot&icon_url=https://some-url-link.jpg"

# Make POST request and receive response
response = requests.request("POST", f"{base_url}/chat.postMessage?{params}", headers=headers, data=payload)
print(response.text)
Incoming Webhooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload, which includes the message and a few other optional details described later.
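For instance, you can trigger an incoming webhook directly with curl (the webhook URL below is a placeholder; use the one Slack generates for you):

curl -X POST -H 'Content-Type: application/json' \
     -d '{"text": "Hello from curl!"}' \
     https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX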
For this example, we are going to create a Web Server and integrate an Incoming Webhook. We will trigger the webhook automatically to send a message to a user on Slack whenever the server receives a message.
1. Navigate to the Custom Integrations page of your Workspace at https://<your-workspace-name>.slack.com/apps/manage/custom-integrations and select Incoming WebHooks.
2. Choose a channel (or user) to post your messages to and add the webhook.
You should see a message like this on Slack
4. Since webhooks work best as callbacks from a server, let’s write a simple HTTP server that runs on localhost and port 3000. The web server will receive a message on the /message path and read the message content from the payload.
# server.py
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Define a custom Request Handler
class CustomHandler(BaseHTTPRequestHandler):
    def set_response(self, code, byte_message):
        self.send_response(code)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(byte_message)

    def do_GET(self):
        if self.path == "/":
            self.set_response(200, "I'm alive!!!\n".encode())
        else:
            self.send_error(404)
        return

    def do_POST(self):
        if self.path == "/message":
            # Get payload
            content_length = int(self.headers["Content-Length"])
            encoded_data = self.rfile.read(content_length)
            data = json.loads(encoded_data.decode("utf-8"))
            if "message" not in data or not data["message"]:
                self.send_error(400, "Bad Request", '"message" must be in the payload')
                return
            self.set_response(200, f"Received message: \"{data['message']}\"\n".encode())
        else:
            self.send_error(404)
        return

# Initialize an HTTP server
port = 3000
address = ("", port)
server = HTTPServer(address, CustomHandler)

# Start your server
print(f"Starting Web server on localhost:{port}..")
server.serve_forever()
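You can start the server and test the /message endpoint with curl (the payload below is just an example):

python3 server.py &
curl -X POST http://localhost:3000/message \
     -H 'Content-Type: application/json' \
     -d '{"message": "Hello server!"}'
# Received message: "Hello server!"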
6. Now that the server is running, let’s integrate the webhook into the code!
# server.py
from http.server import BaseHTTPRequestHandler, HTTPServer
import json, requests

# Define a custom Request Handler
class CustomHandler(BaseHTTPRequestHandler):
    def set_response(self, code, byte_message):
        self.send_response(code)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(byte_message)

    def do_GET(self):
        if self.path == "/":
            self.set_response(200, "I'm alive!!!\n".encode())
        else:
            self.send_error(404)
        return

    def do_POST(self):
        if self.path == "/message":
            # Get payload
            content_length = int(self.headers["Content-Length"])
            encoded_data = self.rfile.read(content_length)
            data = json.loads(encoded_data.decode("utf-8"))
            if "message" not in data or not data["message"]:
                self.send_error(400, "Bad Request", '"message" must be in the payload')
                return
            self.set_response(200, f"Received message: \"{data['message']}\"\n".encode())
            # Trigger the Webhook (make POST request); we can ignore the response and failures
            try:
                webhook_url = "[Your Slack Webhook Url]"
                headers = { 'Content-Type': 'application/json' }
                payload = json.dumps({"text": f"Your server received the following message:\n\n{data['message']}"})
                requests.request("POST", webhook_url, headers=headers, data=payload)
            except Exception:
                pass
        else:
            self.send_error(404)
        return

# Initialize an HTTP server
port = 3000
address = ("", port)
server = HTTPServer(address, CustomHandler)

# Start your server
print(f"Starting Web server on localhost:{port}..")
server.serve_forever()
7. Once updated, we can re-send the same message as earlier and you should receive a message like this on Slack:
An API is a communication method used by applications to talk to other applications. A webhook is a POST request that is triggered automatically when an event happens. Basically, APIs are request-based while webhooks are event-based.
🐢
I always thought the only difference was that CMD can be overridden and that they were mutually exclusive, since Docker containers need a starting process. In fact, they are not mutually exclusive, and understanding the difference between them can be very useful when building Dockerfiles!
An ENTRYPOINT is used to configure a container to run as an executable and it has two forms:
The exec form (preferred):
ENTRYPOINT ["executable", "param1", "param2"]
Command line arguments provided to docker run <image> will be appended after all elements of the array. For example, if you need to provide a third parameter to the above ENTRYPOINT, you can run docker run <image> param3. Moreover, it is possible to override the ENTRYPOINT using docker run --entrypoint. The exec form is parsed as a JSON array, which means that you must use double quotes (") around words, not single quotes ('), and backslashes need to be escaped.
The shell form:
ENTRYPOINT command param1 param2
This form prevents any command line arguments from being provided to the ENTRYPOINT and will start the executable as a subcommand of /bin/sh -c. The executable will not run with process ID (PID) 1 and will not receive Unix signals.
A CMD is used to provide defaults for an executing container. The defaults can be an executable, command and/or parameters. Unlike ENTRYPOINT, CMD has 3 forms:
The exec form (preferred):
CMD ["executable", "param1", "param2"]
Although it looks similar to the ENTRYPOINT exec form, command line arguments provided to docker run <image> will override the default CMD defined in the Dockerfile.
The default arguments form (used with ENTRYPOINT):
CMD ["param1", "param2"]
This form is used when both ENTRYPOINT and CMD instructions are specified. ENTRYPOINT will define the executable and parameters to run, whereas CMD will define additional default parameters, overridable by command line arguments provided to docker run <image>.
The shell form:
CMD command param1 param2
Similar to the exec form, command line arguments provided to docker run <image> will override the default CMD defined in the Dockerfile. However, the shell form will invoke a command shell and allow normal shell processing such as variable substitution.
Both the ENTRYPOINT and CMD instructions allow containers to run as executables, but they are not mutually exclusive. If you need to override the default executable, then you might want to use CMD. If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT together with CMD.
The table below describes the behaviour of ENTRYPOINT with CMD:
| Dockerfile ENTRYPOINT | Dockerfile CMD | docker run --entrypoint | docker run command | Actual command run |
|---|---|---|---|---|
| [exec-1] | [foo bar] | <not set> | <not set> | [exec-1 foo bar] |
| [exec-1] | [foo bar] | [exec-2] | <not set> | [exec-2] |
| [exec-1] | [foo bar] | <not set> | [zoo boo] | [exec-1 zoo boo] |
| [exec-1] | [foo bar] | [exec-2] | [zoo boo] | [exec-2 zoo boo] |
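To see this behaviour in practice, here is a quick sketch using a hypothetical image my-image built with ENTRYPOINT ["echo", "hello"] and CMD ["world"]:

docker run my-image                       # runs: echo hello world
docker run my-image everyone              # runs: echo hello everyone (arguments override CMD)
docker run --entrypoint date my-image     # runs: date (--entrypoint clears the image's CMD)
docker run --entrypoint echo my-image hi  # runs: echo hi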
🐢
This article is a documentation of what I learned when making my first Chrome extension YouTubeStopwatch. If you’re looking for a tutorial for starters, check out the official Getting Started Tutorial.
YouTubeStopwatch was created for a course on Human-Computer Interaction (HCI). The objective was to help users manage the amount of time they would like to spend on YouTube, and somehow incite them to quit YouTube without resorting to blocking the site.
The idea was to prompt the user for the amount of time they wanted to spend on YouTube and start a countdown. Once the time is up, the user is asked whether they want to stay on YouTube or leave. If they choose to keep watching videos, they are subjected to gradual graphical deterioration that slowly worsens their viewing experience.
So how did I get started? Well, the first thing I had to learn was how Chrome Extensions are structured.
src
├── manifest.json
├── popup.html
├── js
│   ├── background.js
│   ├── content.js
│   ├── jquery-3.4.1.min.js
│   ├── constants.js
│   └── ...
├── img
│   └── extension-icon.png
└── css
    └── popup.css
The manifest.json file is the first thing you need when creating an extension. It provides all the information about your extension to Google Chrome, such as the name of your extension, the permissions needed, etc., but we’ll get into that a bit later. Here is a minimal example:
{
  "manifest_version": 2,
  "version": "0.1",
  "name": "My Extension",
  "description": "This is my extension"
}
Background scripts are scripts that run in the background of your browser when you open Google Chrome. You can make the scripts persistent or not depending on your use case. I chose to use a persistent script. As long as Google Chrome is open, the script will be running. To define background scripts, I added a background section to the manifest file.
{
  "manifest_version": 2,
  "version": "0.1",
  "name": "My Extension",
  "description": "This is my extension",
  "background": {
    "scripts": [
      "js/background.js"
    ],
    "persistent": true
  }
}
Note: It is now recommended to use non-persistent background scripts with Event Driven Background Scripts.
Below is an example of what my background script looked like.
// List to track all active YouTube tabs
var active_youtube_tabs = [];

// Create Main Event Listener
function initBackground() {
  chrome.runtime.onMessage.addListener(function (msg, sender) {
    var tabId = sender.tab ? sender.tab.id : null;
    // If sender is youtube, add listener
    if (msg.from === 'youtube' && typeof (msg.event) === 'undefined') {
      active_youtube_tabs.indexOf(tabId) < 0 ? addListeners(tabId) : null;
    }
    // Handles event message
    switch (msg.event) {
      case "START_COUNTDOWN":
        startCountdown();
        break;
      // ...
      default:
        break;
    }
  });
}
initBackground();

// Subscribes tab to active youtube tabs and adds listener to url changes
function addListeners(tabId) {
  active_youtube_tabs.push(tabId);
  // When a youtube tab is closed, remove tabId from active_youtube_tabs list
  chrome.tabs.onRemoved.addListener(function (id) {
    if (tabId === id) {
      removeYoutubeTab(tabId);
    }
  });
  // When the tab url changes, remove tabId from active_youtube_tabs if user is no longer on Youtube
  chrome.tabs.onUpdated.addListener(function (id, changeInfo) {
    if (tabId === id && changeInfo.status === 'complete') {
      chrome.tabs.get(tabId, function (tab) {
        if (tab.url.indexOf('youtube.com') < 0) {
          removeYoutubeTab(tabId);
        }
      });
    }
  });
}

// Removes specific tab from active_youtube_tabs list
function removeYoutubeTab(tabId) {
  var idx = active_youtube_tabs.indexOf(tabId);
  active_youtube_tabs.splice(idx, 1);
}
When the script starts, a callback function is added with onMessage.addListener() to handle events. Depending on the event received, a different action is triggered; for example, the START_COUNTDOWN event starts the countdown in the background script. If the sender is YouTube, the tabId is stored in a list to keep track of active YouTube tabs. This is done using the Chrome Tabs API, so we need to give our extension the tabs permission in the manifest file.
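For context, the 'youtube' messages handled above come from the content script. Here is a minimal sketch of how content.js might send them with chrome.runtime.sendMessage (the actual messages in the project may differ):

// content.js (sketch) -- runs on youtube.com pages
// Tell the background script that this tab is on YouTube
chrome.runtime.sendMessage({ from: 'youtube' });

// Later, fire an event, e.g. to start the countdown
chrome.runtime.sendMessage({ from: 'youtube', event: 'START_COUNTDOWN' });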
I needed to use jQuery in the background script, so I downloaded the jquery-3.4.1.min.js file, saved it in the js directory, and specified the file as a background script (note that jQuery must be listed before background.js so it loads first). Here are the new changes to the manifest file:
{
  "manifest_version": 2,
  "version": "0.1",
  "name": "My Extension",
  "description": "This is my extension",
  "permissions": [
    "tabs"
  ],
  "background": {
    "scripts": [
      "js/jquery-3.4.1.min.js",
      "js/background.js"
    ],
    "persistent": true
  }
}
Content Scripts are run on specific web pages and can interact with a website’s DOM. To define a content script, I added a content_scripts section to the manifest file.
{
  "manifest_version": 2,
  "version": "0.1",
  "name": "My Extension",
  "description": "This is my extension",
  "permissions": [
    "tabs"
  ],
  "background": {
    "scripts": [
      "js/jquery-3.4.1.min.js",
      "js/background.js"
    ],
    "persistent": true
  },
  "content_scripts": [
    {
      "matches": [
        "*://*.youtube.com/*"
      ],
      "js": [
        "js/jquery-3.4.1.min.js",
        "js/content.js"
      ],
      "run_at": "document_end"
    }
  ]
}
The "matches": [ "*://*.youtube.com/*" ]
section tells Chrome to run the content scripts when the URL of the website matches the values specified. The "run_at": "document_end"
section ensures that the content scripts are run after the page is loaded.
Chrome Extensions have changed since I first created this project, but it was still a valuable experience.
🐢