Introduction

A lightweight, stand-alone package that contains everything needed to run a piece of software, including the code, runtime, system tools, system libraries, and settings.


Containerization

Containerization is the process of encapsulating software code along with all of its dependencies inside a single package so that it can be run consistently anywhere.

Docker

Docker is an open source containerization platform. It provides the ability to run applications in an isolated environment known as a container.

Containers are like very lightweight virtual machines that can run directly on our host operating system's kernel without the need for a hypervisor. As a result, we can run multiple containers simultaneously.

Each container contains an application along with all of its dependencies and is isolated from the others. Developers can share these containers as images through a registry and can also deploy them directly on servers.
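
As a small sketch of that share-through-a-registry workflow (nginx is just a convenient public image, and myuser is a hypothetical Docker Hub account):

$ docker pull nginx:latest                     # download an image from a registry (Docker Hub by default)
$ docker tag nginx:latest myuser/nginx:demo    # re-tag it under your own account
$ docker push myuser/nginx:demo                # publish it so others (or your servers) can pull it
$ docker run -d -p 8080:80 myuser/nginx:demo   # run the exact same image anywhere Docker is installed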

Comparing Virtual Machines and Containers

Virtual machines

  • A virtual machine is the emulated equivalent of a physical computer system, with its own virtual CPU, memory, storage, and operating system.

  • A program known as a hypervisor creates and runs virtual machines. The physical computer running a hypervisor is called the host system, while the virtual machines are called guest systems.

  • The hypervisor treats resources — like the CPU, memory, and storage — as a pool that can be easily reallocated between the existing guest virtual machines.

There are two types of hypervisors:

  • Type 1 (bare-metal) hypervisors run directly on the host's hardware (VMware vSphere, KVM, Microsoft Hyper-V).

  • Type 2 (hosted) hypervisors run as a program on top of a host operating system (Oracle VM VirtualBox, VMware Workstation Pro/VMware Fusion).

Containers

  • A container is an abstraction at the application layer that packages code and dependencies together.

  • Instead of virtualizing the entire physical machine, containers virtualize the host operating system only.

  • Containers sit on top of the physical machine and its operating system. Each container shares the host operating system kernel and, usually, its binaries and libraries as well, as the quick check below shows.
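
A quick way to see that shared kernel (assuming a Linux host with Docker already installed) is to compare kernel versions on the host and inside a container:

$ uname -r                          # kernel version on the host
$ docker run --rm alpine uname -r   # a container reports the very same kernel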

What's Different?

|             | VMs                                     | Containers                                    |
| ----------- | --------------------------------------- | --------------------------------------------- |
| Size        | Heavyweight (GB)                        | Lightweight (MB)                              |
| Boot Time   | Startup time in minutes                 | Startup time in seconds                       |
| Performance | Limited performance                     | Native performance                            |
| OS          | Each VM runs in its own OS              | All containers share the host OS              |
| Runs on     | Hardware-level virtualization (Type 1)  | OS virtualization                             |
| Memory      | Allocates required memory               | Requires less memory space                    |
| Isolation   | Fully isolated and hence more secure    | Process-level isolation, possibly less secure |
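
The boot-time row is easy to verify on any machine with Docker installed; a throwaway container typically starts in about a second (alpine is just a small example image):

$ time docker run --rm alpine echo "container is up"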

Getting Docker set up and running

Choosing which Docker product based on requirements

In a production environment that runs containers hosting critical applications, you would rather have your favorite admins install Docker Enterprise.

However, on your development machine or a continuous integration build machine, you can use the free Docker Engine Community or Docker Desktop depending on your machine type.

| Use                                    | Product                                   |
| -------------------------------------- | ----------------------------------------- |
| Developer machine                      | Docker Engine Community or Docker Desktop |
| Small server, small expectations       | Docker Engine Community                   |
| Serious stuff / critical applications  | Docker Engine Enterprise or Kubernetes    |

Installing Docker

OS requirements

To install Docker Engine, you need the 64-bit version of one of these Ubuntu versions:

  • Ubuntu Focal 20.04 (LTS)

  • Ubuntu Bionic 18.04 (LTS)

  • Ubuntu Xenial 16.04 (LTS)

Docker Engine is supported on x86_64 (or amd64), armhf, and arm64 architectures.
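
If you are unsure what your machine reports, these commands show it (this check is an addition to the page, not part of Docker's official requirements):

$ uname -m                    # e.g. x86_64 or aarch64
$ dpkg --print-architecture   # e.g. amd64, armhf, or arm64 on Ubuntu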

Remove old versions

Older versions of Docker were called docker, docker.io, or docker-engine.

If these are installed, uninstall them:

$ sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

Installation methods

You can install Docker Engine in different ways, depending on your needs:

  • Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks. This is the recommended approach.

  • Some users download the DEB package, install it manually, and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

  • In testing and development environments, some users choose to use automated convenience scripts to install Docker.

Install using the repository

Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

Set up the repository

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get update

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88 by searching for the last 8 characters of the fingerprint:

$ sudo apt-key fingerprint 0EBFCD88

pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]
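
One step is missing between verifying the key and installing: the stable repository itself has to be added. With the apt-key based setup above, that usually looks like the following (arch=amd64 is assumed here; use armhf or arm64 if that matches your machine):

$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"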

Install Docker Engine

Update the apt package index, and install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:

 $ sudo apt-get update
 $ sudo apt-get install docker-ce docker-ce-cli containerd.io
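
The specific-version step referenced above is not shown here; on apt-based systems it usually means listing what the repository offers and pinning one of those version strings (VERSION_STRING below is a placeholder, not a real version):

$ apt-cache madison docker-ce                 # list the versions available in the repository
$ sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io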

If you would like to use Docker as a non-root user, you should now consider adding your user to the “docker” group with something like:

sudo usermod -aG docker your-user

Remember to log out and back in for this to take effect!
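
If you would rather not log out right away, opening a subshell with the new group usually does the trick as well:

$ newgrp docker            # start a shell where the docker group is already active
$ docker run hello-world   # should now work without sudo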

Hello World in Docker

Now that we have Docker ready to go on our machines, it's time for us to run our first container. Open up a terminal and run the following command:

docker run hello-world

If everything goes fine, you should see output like the following:

docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete 
Digest: sha256:49a1c8800c94df04e9658809b006fd8a686cab8028d33cfba2cc049724254202
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

To understand what just happened, you need to get familiar with the Docker Architecture, Images and Containers, and Registries.
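
A few standard CLI commands make those pieces visible right away; this is a suggested follow-up rather than part of the original walkthrough:

$ docker images                        # the hello-world image the daemon pulled from the registry
$ docker ps -a                         # the (now exited) container it created from that image
$ docker image inspect hello-world     # image metadata, including the digest shown in the output above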

Docker Engine

Docker Engine is a client-server application with these major components:

  • A server which is a type of long-running program called a daemon process (the dockerd command).

  • A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.

  • A command line interface (CLI) client (the docker command), which uses the REST API to control the daemon, as shown in the example below.
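
You can see the client/server split for yourself: docker version prints separate Client and Server sections, and the daemon's REST API answers directly over its Unix socket (this assumes curl with Unix-socket support and permission to read /var/run/docker.sock):

$ docker version                                                      # Client and Server (Engine) versions
$ curl --unix-socket /var/run/docker.sock http://localhost/version    # the same information straight from the REST API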

Docker Architecture

Docker’s architecture is also client-server based. However, it’s a little more complicated than a virtual machine because of the features involved.

It consists of four main parts:

  1. Docker Client: This is how you interact with your containers. Call it the user interface for Docker.

  2. Docker Objects: These are the main components of Docker: your containers and images. As mentioned already, containers are the running, writable instances of your software, while container images are read-only templates used to create new containers.

  3. Docker Daemon: A background process (dockerd) that receives commands from the Docker client through the REST API and manages your containers, images, volumes, and networks.

  4. Docker Registry: This is where your container images are stored and retrieved; Docker Hub is the best-known public registry and the default one the Docker client uses.

Docker objects are the docker images, containers, volumes, and networks.
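
Each of those object types has a matching ls subcommand, which is a handy way to look around a fresh installation:

$ docker image ls          # images
$ docker container ls -a   # containers, including stopped ones
$ docker volume ls         # volumes
$ docker network ls        # networks (bridge, host, and none exist by default)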

