  • >> SPEAKER 1: Hello and welcome to this complete Docker course,

  • by the end of this course,

  • we'll have a deep understanding of all the main concepts

  • and also a great big picture overview of how

  • Docker is used in the whole software development process.

  • The course is a mix of animated theoretical explanations,

  • but also hands-on demos for you to follow along,

  • so you get your first hands-on experience and confidence

  • using Docker in your projects.

  • So let's quickly go through the topics

  • I'll cover in this course.

  • We will start with the basic concepts of what Docker actually

  • is and what problems it solves.

  • We'll also understand the difference between

  • Docker and a virtual machine. And after installing Docker,

  • we will go through all the main docker commands to start

  • and stop containers, debug containers, etc.

  • After that,

  • we'll see how to use Docker in practice

  • by going through a complete workflow with a demo project.

  • So first we'll see how to develop

  • locally with containers.

  • Then we'll run multiple containers or services with Docker

  • Compose, we'll build our own Docker image with Dockerfile

  • and we'll push that built image into a private

  • Docker repository on AWS, and finally we'll deploy our

  • containerized application. Last but not least,

  • we'll look at how to persist data in Docker,

  • learning the different volume types, and afterwards configure

  • persistence for our demo project.

  • If you get stuck anywhere,

  • just comment under the video and I will try my

  • best to answer your questions. Also,

  • you can join the private community group on Facebook,

  • which is there to exchange your knowledge

  • with others and connect with them.

  • If you like the course, by the end of the video

  • be sure to subscribe to my channel for more

  • related content. So let's get started.

  • So we'll talk about

  • what a container

  • is and what problems it solves.

  • We will also look at a container repository,

  • which is basically a storage for containers.

  • We'll see how a container can actually make

  • the development process much easier and more efficient

  • and also how they solve some of the problems

  • that we have in the deployment process of applications.

  • So let's dive right into it: what a container is.

  • A container is a way to package applications with everything

  • they need inside of the package,

  • including the dependencies and all the configuration necessary.

  • And that package is portable,

  • just like any other artifact is.

  • That package can be easily shared and moved around

  • between a development team or development

  • and operations team.

  • And that portability of containers plus everything

  • packaged in one isolated environment gives it some

  • of the advantages that makes development

  • and deployment process more efficient.

  • And we'll see some of the examples of how

  • that works in later slides, so as I mentioned,

  • containers are portable,

  • so there must be some kind of a storage for those

  • containers so that you can share them and move them around.

  • So containers live in a container repository.

  • This is a special type of storage for containers.

  • Many companies have their own private repositories where

  • they host or store all the containers,

  • and this would look something like this, where

  • you can push all of the containers that you have.

  • But there is also a public repository for Docker

  • containers where you can browse and probably

  • find any application container that you want.

  • So let's head over to the browser and see

  • what that looks like.

  • So if I search here for Docker Hub,

  • which is the name of the public repository for Docker,

  • I will see its official website. So here, if you scroll down,

  • you see that there are

  • more than a hundred thousand container images

  • of different applications hosted or stored

  • in this repository.

  • So here you see just some of the examples.

  • And for every application,

  • there is this official Docker container or container image.

  • But if you are looking for something else,

  • you can search it here.

  • And I see there is an official image for, let's say, Jenkins,

  • but there are also a lot of non-official images or container

  • images that developers, or even

  • Jenkins itself, actually store here.

  • So public repository is where you usually get started

  • when you're using or when you're starting

  • to use the containers where you can find any

  • application image.

  • So now let's see how containers improve

  • the development process with specific examples.

  • How did we develop applications before containers?

  • Usually when you have a team of developers

  • working on some application,

  • you would have to install most of the services on your

  • operating system directly. Right? For example,

  • you're developing some JavaScript

  • application and you need PostgreSQL and Redis for messaging.

  • And every developer in the team would then have

  • to go and install the binaries of those services and

  • configure them and run them on their local development

  • environment, and depending on which operating

  • system they're using,

  • the installation process will actually look different.

  • Also, another thing with installing services

  • like this is that you have multiple steps of installation.

  • So you have a couple of commands that you have to execute.

  • And the chances of something going wrong and error

  • happening is actually pretty high

  • because of the number of steps required to install

  • each service.

  • And this approach or this process of setting

  • up a new environment can actually be pretty tedious,

  • depending on how complex your application is.

  • For example,

  • if you have 10 services that your application is using,

  • then you would have to do that 10 times on each

  • operating system environment.

  • So now let's see how containers solve some of these

  • problems. With containers,

  • you actually do not have to install any of the services

  • directly on your operating system because the container

  • is its own isolated operating system layer

  • with Linux based image.

  • As we saw in the previous slides,

  • you have everything packaged in one isolated environment.

  • So you have PostgreSQL with the specific version

  • packaged with the configuration and the start script

  • inside of one container. So as the developer,

  • you don't have to go and look for the binaries to download

  • on your machine,

  • but rather you just go ahead and check out

  • the container repository to find that specific

  • container and download it on your local machine.

  • And the download step is just one docker command

  • which fetches the container and starts it at the same time.

  • And regardless of which operating

  • system you're on,

  • the docker command for starting the container

  • will not be different. It will be exactly the same.

  • So if you have 10 applications that your JavaScript

  • application uses and depends on.

  • You would just have to run 10 docker commands for each

  • container and that will be it.
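As a rough sketch, the per-service setup described above collapses into one `docker run` per dependency. The image names below are illustrative placeholders, not the exact services from the video:

```shell
# Hypothetical dependency list for a JavaScript app; one command per
# service replaces the whole manual install-and-configure procedure.
for image in postgres:9.6 redis:5.0 rabbitmq:3.8; do
  docker run -d "$image"   # -d starts each container in the background
done
```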

  • Which makes setting up your local development environment

  • actually much easier and much more efficient

  • than the previous version. Also,

  • as we saw in the demonstration before,

  • you can actually have different versions of the same

  • application running on your local environment

  • without having any conflict.

  • So now let's see how containers can improve

  • the deployment process. Before containers,

  • a traditional deployment process would look like this.

  • Development team will produce artifacts together with a set

  • of instructions of how to actually install

  • and configure those artifacts on the server.

  • So you would have a jar file or something

  • similar for your application. And in addition,

  • you would have some kind of a database service or some

  • other service also with a set of instructions of how

  • to configure and set it up on the server.

  • So development team would give those artifacts

  • over to the operations team and the operations team will

  • handle setting up the environment to deploy

  • those applications. Now,

  • the problem with this kind of approach is that,

  • first of all,

  • you need to configure everything and install

  • everything directly on the operating system,

  • which we saw in the previous example that could actually

  • lead to conflicts with dependency version

  • and multiple services running on the same host.

  • Another problem that could arise

  • from this kind of process is when there

  • is misunderstanding between the development team

  • and operations because everything

  • is in a textual guide as instructions.

  • So there could be cases where developers forget

  • to mention some important point about configuration.

  • Or maybe the operations team misinterprets some of those

  • instructions, and when that fails,

  • the operations team has to go back to the developers

  • and ask for more details.

  • And this could lead to some back and forth

  • communication until the application

  • is successfully deployed on the server. With containers,

  • this process is actually simplified, because now

  • you have the developers and operations working

  • in one team to package the whole configuration

  • dependencies inside the application,

  • just as we saw previously.

  • And since it's already encapsulated in one single

  • environment, you don't have to configure any

  • of this directly on the server.

  • So the only thing you need to do is run a docker command

  • that pulls that container image that you've stored somewhere

  • in the repository and then run it. This is,

  • of course, a simplified version,

  • but that makes exactly the process that we saw

  • on the previous slide much easier.

  • No environmental configuration needed on the server.

  • The only thing, of course,

  • you need to do is to install and set up

  • the Docker runtime on the server before you will be able to run

  • containers there, but that's just a one-time effort.
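On the server side, the simplified deployment described above amounts to roughly two commands. The registry and image name here are hypothetical placeholders:

```shell
# Hypothetical application image, pushed to a private repository beforehand.
IMAGE="my-registry.example.com/my-app:1.0"

docker pull "$IMAGE"    # fetch the packaged application from the repository
docker run -d "$IMAGE"  # start it; no per-service setup on the host is needed
```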

  • Now that you know

  • what a container concept is,

  • let's look at what a container is technically.

  • So technically, a container is made up of images.

  • So we have layers of stacked images on top of each other.

  • And at the base of most of the containers,

  • you would have a Linux based image,

  • which is either Alpine with a specific version

  • or it could be some other Linux distribution.

  • And it's important for those base images to be small.

  • That's why most of them are actually Alpine,

  • because that will make sure that the containers

  • stay small in size,

  • which is one of the advantages of using containers.
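Assuming you have an image pulled locally, you can see this layer stack for yourself with `docker history`, which prints one row per image layer, with the base layer at the bottom:

```shell
# Show the stacked layers of an image, most recent layer first;
# the bottom rows come from the small Linux base image (e.g. Alpine).
docker history postgres:9.6
```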

  • So on top of the base image,

  • you would have the application image and

  • this is a simplified diagram.

  • Usually you would have these intermediate images

  • that will lead up to the actual application image that is going

  • to run in the container. And of course, on top of that,

  • you will have all this configuration data. So now,

  • I think it's time to dive into a practical example

  • of how you can actually use a docker container and what

  • it looks like when you install, download and run

  • it on your local machine.

  • So to give you a bit of an idea of how this works,

  • let's head over to Docker Hub and search

  • for PostgreSQL.

  • So here,

  • which is a Docker official image,

  • I can see some of the versions, and let's say I'm looking

  • specifically for an older version, I don't know, nine

  • point six something. So I'm going to pull that one.

  • So this is a Docker repository, so I can actually go

  • ahead and pull the containers from the repository directly.

  • And because it's a public repository,

  • I don't have to log in to it.

  • I don't have to provide any authentication

  • credentials or anything.

  • I can just get started with a simple docker command

  • without doing or configuring anything to access Docker Hub.

  • So on my terminal, I can just do docker pull,

  • or I can even do docker run, and then just copy

  • the image name,

  • and if I don't specify any version,

  • it will just give me the latest. But I want a specific version,

  • so I'm going to go with nine point six, actually,

  • just to demonstrate. So I can provide the version

  • like this with a colon, and I can start it. So,

  • as you see, the first line says: unable to find image locally.

  • So it knows that it has to go to Docker

  • Hub and pull it from there. And the next line says:

  • pulling from library/postgres.
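As a sketch, these are the two commands used in this demo; `docker pull` only downloads the image, while `docker run` downloads it if needed and then starts a container from it:

```shell
docker pull postgres:9.6   # fetch the image layers only
docker run  postgres:9.6   # pull the image if absent, then start a container
```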

  • And here you see a lot of hashes that says downloading.

  • And this is what I mentioned earlier,

  • which is that docker containers, or any

  • containers, are made up of layers. Right?

  • You have the image layer,

  • you have the application layers and so on.

  • So what you see here are actually all those layers

  • that are separately downloading from Docker

  • Hub onto my machine.

  • The advantage of splitting those applications in layers

  • is that actually, for example,

  • if the image changes or I have to download a newer

  • version of Postgres,

  • the layers that are the same

  • between those two

  • versions of Postgres will not be downloaded again,

  • but only those layers that are different. So,

  • for example,

  • now it's going to take around 10 or 15 minutes to download

  • this one image, because I don't have any Postgres locally.

  • But if I were to download the next version,

  • it will take a little bit less time because some layers

  • already exist on my local machine.

  • So now you see that it's already logging, because this command

  • that I ran here,

  • docker run with the container

  • name and version,

  • fetches or pulls the container,

  • but also starts it.

  • So it executes the start script right away

  • as soon as it downloads it.

  • And here you see the output of this starting

  • of the application.

  • So it just gives some output about starting the server

  • and doing some configuration stuff.

  • And here you see: database system is ready to accept

  • connections, and that it has started.

  • So now let's open a new tab, and with docker ps

  • you can actually see all the running containers.

  • So here you see that postgres nine point six

  • is running, and it actually says image.
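A quick sketch of the inspection command used here:

```shell
# List running containers; the IMAGE column shows which image each one was
# started from (CONTAINER ID, IMAGE, COMMAND, CREATED, STATUS, PORTS, NAMES).
docker ps

# Include stopped containers as well.
docker ps -a
```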

  • So this is another important thing to understand when we're

  • talking about containers, there are two technical terms,

  • image and a container.

  • And a lot of people confuse those two, I think.

  • And there is actually a very easy distinction

  • between the two. So an image is

  • the actual package that we saw in one of those previous slides.

  • So the application package,

  • together with the configuration and the dependencies

  • and all these things,

  • this is actually the artifact that is movable

  • around; that is actually the image.

  • A container is when I pull that image on my local machine

  • and I actually start it, so the application inside

  • actually starts; that creates the container environment.

  • So if it's not running, basically it's an image.

  • It's just an artifact that's lying around. If I start it

  • and actually run it on my machine,

  • it is a container. So that is the distinction.

  • So here it shows the active running containers,

  • with a container ID, the image that it's running

  • from, the entry command that it executes

  • and some other status information.

  • So this means that PostgreSQL is now

  • running on my local machine. Simple as that,

  • if I were now to need, let's say,

  • another version of Postgres to run at the same

  • time on my local machine, I could just go ahead and say,

  • let's go back, and let's say I want to have nine point six

  • and ten point ten running at the same time on my

  • local machine. I would just do docker run postgres.

  • And run again; it doesn't find it locally,

  • so it pulls it, and this is what I actually

  • explained to you earlier:

  • because it's the same application,

  • but with just a different version.

  • Some of the layers of the image are the same.

  • So I don't have to fetch those again because they are already

  • on my machine and it just fetches the layers

  • that are different.

  • So that saves a little bit of time.

  • And I think that's actually a good advantage.

  • So now we'll wait

  • for the other image layers to load

  • so that we have the second

  • Postgres version running.

  • And now you see I have

  • Postgres nine point six

  • running in this command

  • line tab and I have

  • Postgres version ten point

  • ten running in the next one.

  • So I have two Postgres containers with different versions

  • running, and I can actually see them here.

  • Both of them are running and there's no

  • conflict between those two.

  • I can actually run any number of applications with different

  • versions maybe of the same application with no

  • problem at all.
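The side-by-side scenario above, as a sketch (the version tags follow the demo):

```shell
# Two versions of the same application, each isolated in its own container.
docker run -d postgres:9.6
docker run -d postgres:10.10

docker ps   # both containers are listed; no conflict between them
```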

  • And we are going to go through how to use those

  • containers in your application and the port

  • configuration and some of the other configuration

  • stuff later in this tutorial when we do a deep dive.

  • But this is just for you to get the first visual image of how

  • docker containers actually work, what they look like,

  • and how easily you can actually start them on your

  • local machine without having to install a specific version

  • of the Postgres application and do all the configuration yourself.

  • When I first started learning Docker,

  • after understanding some of the main concepts,

  • my first question was, OK,

  • so what is the difference between Docker and an Oracle

  • VirtualBox, for example?

  • And the difference is quite simple, I think.

  • And in the short video, I'm going to cover exactly that.

  • And I'm going to show you the difference by explaining

  • how Docker works on an operating system level and then comparing

  • it to how a virtual machine works. So let's get started.

  • In order to understand how Docker works

  • on the operating system level,

  • let's first look at how an operating system is made up.

  • So operating systems have two layers: the operating system kernel

  • and the applications layer. So as you see in this diagram,

  • the kernel is the part that communicates

  • with the hardware components like CPU and memory,

  • et cetera,

  • and the applications run on the kernel layer.

  • So they are based on the kernel. So, for example,

  • you all know the Linux operating system, and there are lots

  • of distributions of Linux out there.

  • There's Ubuntu and Debian and there is Linux Mint, etc.

  • There are hundreds of distributions.

  • They all look different.

  • So the graphical user interface is different.

  • The file system is maybe different.

  • So a lot of applications that you use are different

  • because even though they use the same Linux kernel,

  • they use different or they implement different

  • applications on top of that kernel. So, as you know,

  • Docker and virtual machine,

  • they're both virtualization tools.

  • So the question here is what parts of the operating

  • system they virtualize?

  • So Docker virtualizes the applications layer.

  • So when you download a docker image,

  • it actually contains the applications layer

  • of the operating system and some other

  • applications installed on top of it.

  • And it uses the kernel of the host because it doesn't

  • have its own kernel,

  • the virtual box or the virtual machine,

  • on the other hand,

  • has the applications layer and its own kernel.

  • So it virtualizes the complete operating system,

  • which means that when you download a virtual

  • machine image on your host, it doesn't use your host kernel.

  • It boots up its own.

  • So what does this difference between Docker and virtual

  • machine actually mean? So first of all,

  • the size of Docker images is much smaller

  • because they just have to implement one layer.

  • So Docker images are usually a couple of megabytes.

  • Virtual machine images, on the other hand,

  • can be a couple of gigabytes large.

  • The second one is the speed: you can run and start

  • docker containers much faster than VMs,

  • because every time you start a VM,

  • it has to boot up the operating system kernel

  • and the applications on top of it.

  • The third difference is compatibility,

  • so you can run a virtual machine image of any

  • operating system on any other operating system host,

  • but you can't do that with Docker.

  • So what is the problem exactly?

  • Let's say you have a Windows operating system

  • with a kernel and some applications and you want

  • to run Linux based Docker image on that Windows host.

  • The problem here is that a Linux-based

  • docker image might not be compatible

  • with the Windows kernel,

  • and this is actually true for the Windows

  • versions below 10 and also for the older Mac versions,

  • which if you have seen how to install Docker

  • on different operating systems,

  • you see that the first step is to check whether your hosts

  • can actually run Docker natively,

  • which basically means is the kernel compatible

  • with the Docker images? So in that case,

  • a workaround is that you install a technology

  • called Docker Toolbox,

  • which abstracts away the kernel to make

  • it possible for your hosts to run different docker images.

  • So in this video,

  • I will show you how to install Docker on different

  • operating systems,

  • the installation will differ not only based

  • on the operating system,

  • but also the version of the operating system.

  • So you can actually watch this video selectively,

  • depending on which OS and the version

  • of the OS you have.

  • I will show you how to find out which installation step

  • applies to you in the before installing section,

  • which is the first one. So once you find that out,

  • you can actually directly skip to that part of the video.

  • where I explain that in detail.

  • I will put the minute locations of each part

  • in the description of the video.

  • And also I will put all the links that I use

  • in the video, in the description,

  • so that you can easily access them. Also,

  • if you have any questions during the video

  • or if you get stuck installing the docker

  • on your system,

  • please post a question or problem in the comments

  • section so that I can get back to you and help

  • you proceed,

  • or maybe someone from the community will.

  • So with that said, let's dive right into it.

  • So if you want to install Docker,

  • you can actually Google it and you get the official

  • documentation of Docker.

  • It's important to note that there are two

  • editions of Docker.

  • There are Community and Enterprise

  • editions. For us, to begin with,

  • the Community edition will be just fine. In the Docker

  • Community Edition tab,

  • there is a list of operating systems

  • and distributions in order to install Docker. So,

  • for example, if we start with Mac,

  • we can click in here and we see the documentation

  • of how to install it on Mac,

  • which is actually one of the easiest.

  • But we'll see some other ones as well.

  • So before you install

  • Docker on your

  • Mac or Windows computer,

  • there are prerequisites to be considered.

  • So for Mac and Windows,

  • there has to be some criteria of the operating

  • system and the hardware met in order to support

  • running docker.

  • If you have a Mac, go through the system requirements

  • to see if your Mac version actually supports Docker.

  • If you have Windows,

  • then you can go to the Windows tab and look at

  • the system requirements there or what to know

  • before you install. For example,

  • one thing to note is that Docker natively

  • runs only on Windows 10.

  • So if you have a Windows version,

  • which is less than 10,

  • then Docker cannot run natively on your computer.

  • So if your computer doesn't meet the requirements to run Docker,

  • there is a workaround for that, which is called Docker Toolbox.

  • Instead of Docker,

  • you basically just have to install a toolbox

  • that will become a sort of a bridge between your

  • operating system and Docker,

  • and that will enable you to run Docker on your legacy computer.

  • So if that applies to you,

  • then skip ahead in this video to the part where

  • I explain how to install Docker toolbox on Mac and on Windows.

  • So let's install Docker for Mac. As you see here,

  • there are two channels that you can download

  • the binaries from or the application from.

  • We will go with the stable channel.

  • And other thing to consider,

  • if you have an older version of Mac, the software

  • or the hardware,

  • please go through the system requirements to see if you can

  • actually install Docker.

  • So here there is a detailed description of what Mac

  • version you need to be able to run Docker, and also

  • you need at least four gigabytes of RAM

  • and by installing Docker you will actually have

  • the whole package of it, which is the Docker Engine,

  • which is necessary to run

  • the containers on your laptop; the Docker command line client,

  • which will enable you to execute some docker commands; and Docker Compose.

  • If you don't know it yet, don't worry about it.

  • But it's just a technology to orchestrate

  • multiple containers, and some other stuff that we're not

  • going to need in this tutorial.

  • But you will have everything in a package installed.

  • So go ahead and download the stable version. Well,

  • I already have Docker installed from the edge channel,

  • so I won't be installing it again.

  • But it shouldn't matter because the steps

  • of installation are the same for both.

  • So once the Docker .dmg file is downloaded,

  • you just double-click on it and it will pop up

  • this window; just drag the Docker whale up

  • into the Applications folder and it will be installed on your

  • Mac. As the next step,

  • you will see Docker installed in your Applications,

  • so you can just go ahead and start it.

  • So as you see,

  • the Docker whale icon is starting here;

  • if you click on it,

  • you see the status that Docker is running.

  • And you can configure some preferences and check the Docker

  • version and so on.

  • And if you want to stop Docker or quit it on your Mac,

  • you can just do it from here.

  • And important maybe interesting note here

  • is that if, let's say, you download or install

  • Docker and you have more than one

  • account on your laptop,

  • you'll actually get some errors or conflicts

  • if you run Docker at the same time on multiple accounts.

  • So what I do, for example,

  • is that if I switch to another account where

  • I'm also going to need Docker, I quit it from here

  • and then I start it from the other account,

  • so that I don't get any errors.

  • So that may be something you need to consider if you use

  • multiple accounts.

  • So let's see how

  • to install Docker on Windows. The first step,

  • as I mentioned before,

  • is to go to that before you install section and to see

  • that your operating system and your computer meets all

  • the criteria to run Docker natively.

  • So if you are installing Docker for the first time,

  • don't worry about most of these parts

  • like Docker Toolbox and Docker Machine.

  • There are two things that are important.

  • One is to double check that your Windows version

  • is compatible for Docker and the second one is to have

  • virtualization enabled. Virtualization is by default

  • always enabled, unless you manually disabled it.

  • So if you're unsure,

  • then you can check it by going to the task manager performance

  • CPU tab and here you can see the status

  • of the virtualization.

  • So once you have checked that and made sure that these

  • two prerequisites are met,

  • then you can scroll up and download the Windows

  • installer from the stable channel.

  • Once the installer is downloaded,

  • you can just click on it and follow

  • the installation wizard to install Docker on Windows. Once

  • the installation is completed,

  • You have to explicitly start Docker because it's not

  • going to start automatically. So for that,

  • you can just go and search for the Docker

  • for Windows app on your Windows machine.

  • Just click on it and you will see the Docker

  • whale icon starting. And if you click on that icon,

  • you can actually see the status that says

  • Docker is now up and running.

  • So this is basically it for the installation.

  • Now let's see how

  • to install Docker on different

  • Linux distributions,

  • and this is where things get a little bit more complicated.

  • So first of all,

  • you see that in the menu on the left,

  • for different Linux distributions

  • the installation steps will differ. But also,

  • for example,

  • if we just click on Ubuntu for the guide,

  • you can see that in the prerequisites section,

  • there is also differentiation between the versions of the same

  • Linux distribution.

  • And there may be some even more complicated scenarios

  • where the combination of the version

  • of the distribution and the architecture

  • it's running in also makes some difference

  • into how to set up Docker on that specific environment.

  • Because of that,

  • I can't go through a docker installation process of every

  • Linux environment because they're just

  • too many combinations. So instead,

  • what we'll do is just go through a general overview

  • of the steps and configuration process to get Docker

  • running on your Linux environment.

  • And you can just adjust it then for your

  • specific set up.

  • So these are some general steps to follow in order

  • to install Docker on your Linux environment.

  • First of all,

  • you have to go through the operating system

  • requirements part on the relevant Linux

  • distribution that applies for you.

  • The second step in the documentation

  • is to uninstall old versions. However,

  • if it's the first time you installing Docker,

  • then you don't have to worry about that.

  • You also don't have to worry about the supported storage

  • drivers and you can skip ahead to the part of installing

  • Docker Community Edition.

  • So for any Linux distribution here,

  • the steps will be or the options for installing

  • Docker will be the same.

  • So the first option is basically to set up a repository

  • and download Docker and install

  • it from the repository.

  • The second option is to install the packages manually. However,

  • I wouldn't recommend it,

  • and I think the documentation doesn't recommend it either,

  • because then you will have to do a lot of steps

  • of the installation in the maintenance

  • of the versions manually. So I wouldn't do that.

  • The third one is just for the testing purposes.

  • It may be enough for the development

  • purposes as well, but I would still not do it,

  • which is basically just downloading some

  • automated scripts that will install and set up

  • Docker on your Linux environment. However,

  • again, I wouldn't go with it.

  • I would actually just do the first option,

  • which is just downloading the docker from the repository.

  • So in order to install Docker using the first option,

  • which is downloading it from Docker's repositories,

  • you have two main steps.

  • So the first one is to set up the repository,

  • which differs a little bit depending

  • on which distribution you have,

  • and then install Docker CE from that repository.

  • So for Ubuntu and Debian, the steps for setting up

  • the repository are generally just updating your package index,

  • then setting up a secure connection with the repository and adding

  • Docker's official GPG key, which only Ubuntu and Debian need.

  • You don't have to do these steps for CentOS and Fedora. There,

  • You have to install the required packages

  • And the last step for setting up the repository

  • is basically setting up the stable repository of Docker,

  • which we saw previously on the overview

  • that there are two channels, which are stable and edge. Here,

  • You always have to set up the stable repository,

  • optionally you can also set up the edge repository.

  • But I would just do stable this time.

  • And here also something to notice.

  • Depending on the architecture,

  • you have to set that

  • as a parameter when you set up the repository.

  • So if you have, for example, a different architecture,

  • you can use those steps to display the correct

  • command for it.

  • And I guess that applies to other Linux

  • distributions as well. Like, for example,

  • here you also have the second tab where you see

  • a separate command for it.

  • So these steps should actually set up the repository.
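On Ubuntu, for example, the repository setup described above looks roughly like this (a sketch based on the Docker documentation of that era; the key URL and repository line differ per distribution and architecture, so check the official guide for your setup):

```shell
# Update the package index and install packages needed to use an HTTPS repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent

# Add Docker's official GPG key (Ubuntu/Debian only)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Set up the stable repository; the arch parameter must match your machine
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"
```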

  • So as a next step,

  • you can then install Docker CE

  • from those repositories.

  • So installing Docker from the set-up repository

  • is actually pretty straightforward,

  • those steps are the same or similar for all

  • the distributions,

  • basically you just update the package index

  • and then you just say install docker-ce. So this command will

  • just download the latest version.

  • If you want to install a specific one,

  • which you will need to do in a production environment,

  • then you can just provide a version like this.

  • You just say docker-ce equals some specific version.

  • And using this command,

  • you can actually look up what versions are available

  • in that repository that you just set up. And with this command,

  • actually Docker will be installed on your Linux

  • environment and then you can just

  • verify using sudo docker run hello-world,

  • which is this demo image of Docker.

  • You can verify that Docker is running, and this will start

  • the hello-world docker container on your environment.
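Put together, the install-and-verify steps for Ubuntu look roughly like this (a sketch; the pinned version string below is just an illustrative placeholder you would look up yourself):

```shell
# Install the latest Docker CE from the repository set up earlier
sudo apt-get update
sudo apt-get install -y docker-ce

# List the docker-ce versions available in that repository
apt-cache madison docker-ce

# Or pin a specific version, as you would in a production environment
# (the version string here is a hypothetical example)
sudo apt-get install -y docker-ce=5:18.09.1~3-0~ubuntu-bionic

# Verify the installation with the hello-world demo image
sudo docker run hello-world
```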

  • So as I mentioned previously,

  • for environments that do not support running docker natively,

  • there is a workaround which is called Docker Toolbox.

  • So Docker Toolbox is basically an installer for Docker

  • environment set up on those systems.

  • So this is how to install Docker Toolbox on your Mac.

  • This is the whole package that comes

  • with the installation of Docker Toolbox,

  • which is basically the Docker command line, Docker Machine,

  • Docker Compose, basically all the packages that we saw

  • in the native installation. And on top of that,

  • you also get the Oracle VirtualBox.

  • So in order to install the toolbox,

  • it's actually pretty straightforward.

  • On this website,

  • you can go to the toolbox releases.

  • We have a list of latest releases.

  • You just take the latest release.

  • And here you see two artifacts. This one is for Windows,

  • obviously, and you just download the package for Mac.

  • And once it's downloaded,

  • you just click on it and go through the installation wizard.

  • Leave all the options by default as they are,

  • do not change anything.

  • And after the installation you can just validate

  • the installation is successful and you can actually run docker.

  • So after seeing the installation with successful screen,

  • just go and look up in your launch pad QuickStart

  • terminal and once you open it,

  • you should be able to run docker commands, and you can

  • just try to run Hello World,

  • which should just start up or bring up this

  • hello-world docker container on your environment.

  • So now let's see

  • how to install Docker Toolbox on Windows,

  • here, too, you see that you get the whole package

  • of Docker technologies with the toolbox,

  • which is basically the same package that you get

  • on the native Docker installation.

  • And on top of that, you get Oracle VM VirtualBox,

  • which is the tool that enables Docker to

  • run on such older systems.

  • So before you install the toolbox,

  • you have to make sure that you meet some

  • of the preconditions. Number one,

  • you have to make sure your Windows system supports

  • virtualization and that virtualization

  • must be enabled.

  • Otherwise Docker won't start.

  • So depending on which Windows version you have,

  • looking up or checking the virtualization

  • status will be different.

  • So I would suggest you Google it and look up

  • how to find the virtualization status

  • to see that it's enabled. Once you have that checked,

  • Also make sure that your Windows operating

  • system is 64 bits.

  • So if those two criteria are met,

  • then you can go ahead and install the Docker Toolbox.

  • The place where you see the releases

  • or they release artifacts is toolbox releases link here,

  • which I have open.

  • So it's basically a list of the releases.

  • You just take the latest one, which has two artifacts.

  • This is the one for Windows.

  • You just download this executable file,

  • click on it and go through the installation

  • wizard. Once the installation is completed,

  • there are just a couple of steps here.

  • You can verify that Docker was installed

  • or the toolbox was installed by just looking up

  • the Docker QuickStart terminal on your Windows

  • machine, where it must be installed.

  • And once you click on it and open it,

  • you should be able to run docker commands in the terminal.

  • So the basic docker command that you can test

  • will be docker run hello-world,

  • which will just fetch this basic docker container

  • from the public registry and run it on your computer.

  • If that command is successful,

  • it means that Docker was successfully installed

  • on your computer and now you can proceed with the tutorial.

  • So in this video,

  • I'm going to show you some basic docker commands,

  • at the beginning,

  • I'm going to explain what the difference

  • between a container and an image is,

  • because that's something a lot of people confuse.

  • Then I'll very quickly go through versions and tags,

  • and then show you a demo of how to use the basic docker

  • commands that will be enough to pull an image

  • locally to start a container,

  • to configure a container and even debug the container.

  • So with that said, let's get started.

  • So what is the difference between a container and an image?

  • mostly people use those terms interchangeably,

  • but actually there is a fine difference

  • between the two. Theoretically,

  • a container is just a part of the container runtime.

  • So container is the running environment for an image.

  • So as you see in this graphic,

  • the application image that runs the application

  • could be Postgres or Redis. The application needs,

  • let's say,

  • a file system where it can save the log files or where you can

  • store some configuration files.

  • It also needs the environmental configuration like environmental

  • variables and so on.

  • So all this environmental stuff is provided

  • by the container, and the container also has a port that is bound to it,

  • which makes it possible to talk to the application

  • which is running inside of a container.

  • And of course,

  • it should be noted here that the file system

  • is virtual in container.

  • So the container has its own abstraction

  • of an operating system,

  • including the file system and the environment,

  • which is of course different from the file system

  • and environment of the host machine.

  • So in order to see the difference between

  • container and image in action,

  • let's head over to the Docker hub and find, for example,

  • a Redis image.

  • Another thing to note is that in the Docker hub,

  • all the artifacts that are in the Docker

  • hub are images.

  • So we're not talking about containers here.

  • All of these things are images: Docker official images.

  • So we're going to go ahead and pull a Redis image out

  • of the Docker hub to my laptop.

  • So you see the different layers of the image

  • are downloaded.

  • And this will take

  • a couple of minutes.

  • So once the download is complete,

  • I can check all the existing images on my

  • laptop using the docker images command.

  • So I see I have two images,

  • Redis and Postgres, with their tags, sizes and so on.

  • Another important aspect of images

  • is that they have tags or versions. So,

  • for example, if we go back to the Docker hub,

  • each one,

  • each image that you look up in the Docker hub will

  • have many different versions.

  • The latest is always the one that you get

  • when you don't specify the version. Of course,

  • if you have a dependency on a specific version,

  • you can actually choose the version you want

  • and specify it, and you can select one from here.

  • So this is what you see here.

  • The tag is basically the version of the image.

  • So I just downloaded the latest and I can also

  • see the size of the image. So now to this point,

  • we have only worked with images,

  • there is no container involved and there is no redis running.

  • So now let's say I need redis running so that my

  • application can connect to it.

  • I'll have to create a container of that Redis image

  • that will make it possible to connect

  • to the Redis application.

  • And I can do it by running the Redis image.

  • So if I say docker run redis,

  • This will actually start the image in a container,

  • so as I said before,

  • container is a running environment of an image.

  • So now if I open a new tab and do docker ps, I will

  • get the status of all the running docker containers, so I can see

  • the container redis is running, with a container ID, based

  • on the image of redis, and some other information about it,

  • for example, the port that it's running on.

  • And so on.

  • So as you see here, the docker run redis command

  • will start the redis container in the terminal

  • in an attached mode. So, for example,

  • if I were to terminate this with Ctrl+C,

  • you see the redis application stops

  • and the container will be stopped as well.

  • So if I do docker ps again,

  • I see that no container is running.

  • So there is an option for Docker Run command

  • that makes it possible to run

  • the container in a detached mode, and that is -d.

  • So if I do docker run -d redis,

  • I will just get the ID of the container

  • as an output, and the container will keep running.

  • So if we check again with docker ps,

  • I see the container with ID starting with eight three eight,

  • which is the same thing here is running.

  • So this is how you can start it in the detached mode now.

  • For example,

  • if you would want to restart a container

  • because I don't know,

  • some application crashed inside or some error happened.

  • So you want to restart it, you would need the container ID.

  • So just the first part of it, not the whole string.

  • And you can simply say docker

  • stop and the ID of the container, and that will

  • stop the container,

  • even if it's running. If you want to start it again,

  • you can use the same ID with docker start.

  • So let's say you stop

  • a docker container at the end of the day,

  • you go home, you come back the next day,

  • open your laptop and you want to restart the stopped container.

  • Right. So if you do docker ps,

  • the output is empty. You don't see any containers.

  • So what you can do, alternatively to just looking up your

  • command line history, is docker ps -a, which will

  • show you all the containers which are running

  • or not running.

  • So here you see the container ID again

  • and you can restart it.
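The lifecycle walked through here can be summarized in a few commands (the container ID is a hypothetical example; use the one your docker ps prints):

```shell
docker run -d redis   # create and start a redis container in detached mode
docker ps             # list running containers and note the ID, e.g. 838...
docker stop 838       # stop it; the first characters of the ID are enough
docker ps -a          # list ALL containers, stopped ones included
docker start 838      # restart the stopped container with the same ID
```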

  • OK, so let's try another thing,

  • let's say you have two parallel applications

  • that both use Redis, but in different versions,

  • so you would need two redis containers with different image

  • versions running on your laptop. Right, at different times maybe,

  • or at the same time. So here we have the latest one,

  • which is Redis 5.0.6.

  • And let's head over to the Docker hub

  • and select version.

  • Let's say you need version 4.0.

  • So remember, the first time that we downloaded

  • the redis image, we did docker pull redis. However,

  • if you run Docker,

  • if you use docker run with the redis image and the tag,

  • which was 4.0, it will pull the image

  • and start the container right away after it.

  • So it does two commands basically in one.

  • So it's docker,

  • pull and docker start in one command.

  • So if I do this, it says it can't find the image locally.

  • So it goes and pulls the image from the repository

  • to my laptop.

  • And again,

  • we see some layers are downloaded and the container

  • is started right away. And now, if I do docker ps,

  • you see that I have two redis containers running.

  • So this is where it gets interesting. Now,

  • how do you actually use any container that you just started?

  • So in this output,

  • you also see the ports section,

  • which specifies on which port the container is listening

  • to the incoming requests.

  • So both containers open the same port,

  • which is what was specified in the image. So.

  • In the logs of the container,

  • you can see the information that it is running on

  • port 6379.

  • So how does that actually work and how do we not

  • have conflicts while both are running on the same port?

  • So to explain that,

  • let's head over to our slide and see how this works. As

  • you know,

  • container is just the virtual environment

  • running on your host.

  • And you can have multiple containers running

  • simultaneously on your host, which is your laptop, PC,

  • whatever you're working on.

  • And your laptop has certain ports available that you can

  • open for certain applications.

  • So how it works is that you need to create

  • a so-called binding between a port that your laptop,

  • your host machine has and the container. So,

  • for example,

  • in the first container part here,

  • you see the container is listening on port 5000

  • and you bind your laptop's port

  • 5000 to that container's port.

  • Now, you would have a conflict if you opened two

  • bindings to port 5000 on your host, because you would get a message that

  • the port is already bound or is already in use.

  • You can't do that.

  • However, you can have two containers,

  • as you see, the second and third containers are both

  • listening on port 3000,

  • which is absolutely OK as long as you're binding

  • them to two different ports from your host machine.

  • So once the port binding between the host

  • and the container is already done,

  • you can actually connect to the running container

  • using the port of the host. So in this example,

  • you would have something like localhost and then the port

  • of the host and the host then will know how

  • to forward the request to the container

  • using the port binding. So if we head back here,

  • you see that containers have their ports and they're

  • running on the same one. However,

  • we haven't made any binding between my laptop's

  • ports and the container port. And because of that,

  • the container is basically unreachable by any application.

  • So I won't be able to use it.

  • So the way we actually do that is by specifying

  • the binding of the ports during the run command.

  • So I'm going to break this and check that there

  • is just one container running. Now, I'm going to

  • stop the other one as well so we can start them anew. OK,

  • so we see both containers are here.

  • So now we want to start them using the binding between

  • the host and the container ports.

  • But again, we have two of them.

  • So we need to bind them to two different ports on my laptop.

  • So the way to do it is you do docker run

  • and you specify with minus P, the port of the host.

  • That's the first one. So let's go with 6000.

  • It doesn't really matter in this case,

  • and the second one is the port that you're binding this to,

  • which is the container port.

  • So we know the container port will be 6379.

  • And this is what we bind to.

  • So my laptop's port 6000 to it.

  • And if I do this... well,

  • oops.

  • Let's actually clean

  • this mess. Again, here

  • you see the binding. All right,

  • so your laptop's port 6000 is bound to the container's

  • port 6379.

  • So now let's do another thing and let's

  • start it in a detached mode like this. Let's check again.

  • It's running again, and now

  • I want to start the second container,

  • let's clear this again.

  • So here you see it created

  • a bunch of containers, because

  • when I specified different options with the port binding,

  • it actually created new containers.

  • That's why you see a couple of more here.

  • So I'm going to copy the image name with the tag 4.0,

  • plus -p. So, for example, if I were to do this now,

  • and I would try to run the other redis,

  • the second redis container, with the same

  • ports on my laptop,

  • I would get an error saying port is already

  • allocated so I can do six thousand one and run it again.

  • I'll run it in detached mode so that it doesn't block the terminal.

  • And if I go over here and do docker ps,

  • I see that I have two different redis

  • versions running,

  • both of them bound to different ports on my

  • laptop and the containers themselves listening to request

  • on the same port.
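So the two containers from this demo end up being started like this (host ports 6000 and 6001 are arbitrary choices; 6379 is the port redis listens on inside the container):

```shell
# Bind the host's port 6000 to the first container's port 6379
docker run -d -p 6000:6379 redis

# Reusing host port 6000 would fail with "port is already allocated",
# so bind the second container to a different host port
docker run -d -p 6001:6379 redis:4.0

# Both containers listen on 6379 internally,
# but are reachable on 6000 and 6001 of the host
docker ps
```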

  • So so far,

  • we have seen a couple of basic docker commands,

  • we have seen docker pull,

  • which pulls the image from the repository

  • to the local environment. We also saw docker run,

  • which basically combines pull and start,

  • pulls the image if it's not locally available

  • and then starts it right away. Then we saw start and stop,

  • which make it possible to restart a container

  • if you need to.

  • We also saw Docker run with options.

  • The one option that we saw was -d,

  • which is detach, so you can run the container in detached mode

  • so you can use a terminal again,

  • minus P allows you to bind port of your host to the container.

  • So very important to remember minus P,

  • then comes the port of your host and then comes

  • the port of your container.

  • We also saw docker ps, and

  • docker ps -a, which basically gives you all

  • the containers, no matter if they're running

  • currently or not. We also saw docker images,

  • which gives you all the images that you have locally. So,

  • for example,

  • if after a couple of months you decide to clean up

  • your space and get rid of some stale images,

  • you can actually check them,

  • check the list and then go through them and delete them.

  • You can do the same with stale docker

  • containers that you don't use anymore

  • or you don't need any more. You can also get rid of them.
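A cleanup like the one described here could look like this (a sketch; the IDs are placeholders taken from your own docker images and docker ps -a output):

```shell
docker images               # list all images stored locally
docker ps -a                # list all containers, running or not
docker rm <container-id>    # delete a stopped container you no longer need
docker rmi <image-id>       # delete a stale image (remove its containers first)
```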

  • So the final part of the Docker basic commands

  • are commands for troubleshooting,

  • which are very, very useful.

  • If something goes wrong in the container,

  • you want to see the logs of the container

  • or you want to actually get inside of the container,

  • get the terminal and execute some commands in it.

  • So let's do docker ps. Yes,

  • we have two containers running right now.

  • We don't have any output. We don't see any logs here.

  • So let's say something happens.

  • Your application cannot connect to redis and you don't

  • know what's happening. So ideally,

  • you would want to see what logs redis container is producing.

  • Right. The way to do that is very easy.

  • You just say docker logs and you specify the container

  • ID and you see the logs.

  • You can also do the logs if you don't want

  • to remember the container ID or copy it all the time;

  • you can remember the name of the container and you can

  • get the logs using the name.

  • So a little side note here as we're talking about

  • the names of the containers, so here it is. You see,

  • when a container is created,

  • you just get some random name like this so you can name your

  • containers as you want using another option

  • of the docker run,

  • which might be pretty useful sometimes if you don't want

  • to work with the container IDs and you just want

  • to remember the names or if you just want

  • to differentiate between the containers. So,

  • for example,

  • let's create a new container from the redis

  • 4.0 image using a different name that we choose.

  • So I'm going to stop this container and I'm

  • going to create a new one from the same image.

  • So let's run it in detached mode, let's open a port,

  • binding it to

  • 6379, and give a name

  • to the container. And let's call it, since it's the older version,

  • let's call it redis-older.

  • And we need to specify the image. So remember,

  • this will create a new container since we're running

  • the docker run command again.

  • So if we execute this and check again,

  • we see the redis 4.0 image based

  • container is created, which is fresh, new,

  • you can see,

  • and the name is

  • redis-older, and we can do the same for

  • the other container so that we kind of know

  • which container is which. So let's stop this one,

  • and I will use the same command here.

  • This will be the latest one, and I will call it redis-latest,

  • and bind another port.

  • So I'm going to run it and let's see.

  • So here I have two containers running now, and I know

  • which is which: redis-older and redis-latest. So, for example,

  • if the older version has some problems,

  • I can just do docker logs redis-older and I can

  • get my logs.
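The two named containers from this part are created and inspected roughly like this (the host ports are assumptions carried over from earlier in the demo):

```shell
# Create containers with readable names instead of random generated ones
docker run -d -p 6000:6379 --name redis-older redis:4.0
docker run -d -p 6001:6379 --name redis-latest redis

# Now logs can be fetched by name instead of by container ID
docker logs redis-older
```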

  • So another very useful command in debugging

  • is Docker exec.

  • So what we can do with Docker exec is we can actually get

  • the terminal of a running container.

  • So let's check again.

  • We have two containers running and let's say there

  • is some problem with the redis-latest container.

  • And I want to get a terminal of the container and,

  • maybe navigate a directory inside,

  • check the log file or maybe check the configuration

  • file or print out the environmental

  • variables or whatever. So in order to do that,

  • we use the docker exec command with -it, which stands

  • for interactive terminal.

  • Then I specify the container ID and I say /bin/bash. So I get the terminal.

  • And here you see that the cursor changed.

  • So I'm inside of the container as a user.

  • And here if I say ls, it's empty.

  • I can also print out which directory I am in.

  • I can go to the home directory, see what's there.

  • So I have my virtual file system inside

  • of a container and here I can navigate the different

  • directories and I can check stuff.

  • I can also print all the environmental

  • variables to see that something is set correctly

  • and do all kinds of stuff here.

  • And this could be really useful if you have a container

  • with some complex configuration or if,

  • for example,

  • you are running your own application that you wrote

  • in a container and you have some complex configuration

  • there or some kind of setup,

  • and you want to validate that everything is correctly

  • set. In order to exit the terminal, just type exit

  • and you're out.

  • You can also do the same using the name again

  • if you don't want to work with the IDs and you just

  • want to remember the names of the container

  • to make it easier,

  • you can do it with the name as well.
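A typical docker exec debugging session, as described above, looks like this:

```shell
# Open an interactive terminal inside the running container
docker exec -it redis-latest /bin/bash

# ...now inside the container's virtual file system:
ls            # list the current directory
pwd           # print which directory we are in
cd /home; ls  # navigate around
env           # print the environment variables
exit          # leave the container's terminal
```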

  • Same thing. Since most of the container images

  • are based on some lightweight Linux distributions,

  • you won't have much of the Linux commands

  • or applications installed here. For example,

  • you wouldn't have curl or some other stuff.

  • So you are a little bit more limited in that sense.

  • So you can't execute a lot of stuff

  • in the docker containers, but for most of the debugging work,

  • it should actually be enough.

  • So the final part to review the difference

  • between docker run and docker start,

  • which might be confusing for some people,

  • let's revisit them. So basically,

  • Docker Run is where you create a new container from an image.

  • So Docker Run will take an image with a specific

  • version or just latest.

  • right, as an option or as an attribute.

  • With docker start, you're not working with images,

  • but rather with containers. So for example, as we saw,

  • docker run has a lot of options.

  • You specify with -d, and with -p the port binding,

  • and then you have this name of the container

  • and all the stuff.

  • So basically you tell Docker at the beginning

  • what kind of container with what attributes

  • name and so on to create from a specific image.

  • But once the container is created and you can

  • see that using the docker ps command, for example, here,

  • the last one that we created,

  • and if you stop it and you want to restart it,

  • you just need to use the docker start command and specify

  • the container ID.

  • And when you start it, the container will retain

  • all the attributes that we defined when creating

  • the container using Docker Run.

  • So Docker Run is to create a new container.

  • Docker Start is to restart a stopped container.
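In short, the difference looks like this:

```shell
# docker run: creates a NEW container from an image,
# with all the attributes you pass (-d, -p, --name, ...)
docker run -d -p 6001:6379 --name redis-latest redis

# docker start: restarts an EXISTING, stopped container;
# it retains the attributes set at docker run time
docker stop redis-latest
docker start redis-latest
```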

  • So once you've learned

  • the basic concepts

  • and understood how it works,

  • it's important to see how Docker is actually

  • used in practice.

  • So in software development workflow,

  • you know you have these classical steps of development

  • and continuous delivery or continuous integration

  • and then it eventually gets deployed on some environment,

  • which could be a test environment

  • or a development environment.

  • So it's important to see how Docker actually

  • integrates in all those steps. So in the next couple of videos,

  • I'm going to concentrate exactly on that.

  • So we're going to see some overview of the flow

  • and then we're going to zoom in on different parts

  • and see how Docker is actually used in those individual steps.

  • So let's consider

  • a simplified scenario where

  • you're developing a JavaScript

  • application on your laptop,

  • right on your local development environment,

  • your JavaScript application uses a MongoDB database,

  • and instead of installing it on your laptop,

  • you download a docker container from the Docker hub.

  • So you connect your JavaScript application with the MongoDB

  • and you start developing.

  • So now let's say you develop the application first version

  • of the application locally and now you want to test

  • it or you want to deploy it on the development

  • environment where a tester in your team is going to test

  • it, so you commit your JavaScript application to Git

  • or some other version control system, and that will trigger

  • a continuous integration,

  • a Jenkins build or whatever you have configured,

  • and the Jenkins build will produce artifacts from your application.

  • So first you will build your JavaScript application

  • and then create a docker image out of that JavaScript artifact.

  • Right. So what happens to this Docker image once

  • it gets created by Jenkins build?

  • It gets pushed to a private Docker repository,

  • so usually in a company you would have a private

  • repository because you don't want

  • other people to have access to your images.

  • So you push it there.

  • And now, the next step could be configured on Jenkins

  • or some other scripts or tools.

  • That Docker image has to be deployed

  • on a development server.

  • So you have a development server that pulls the image

  • from the private repository,

  • your JavaScript application image,

  • and then pulls the MongoDB image that your JavaScript

  • application depends on from a Docker hub.

  • And now you have two containers, one,

  • your custom container and a publicly available

  • MongoDB container, running on the dev server,

  • and they talk to each other. You have to configure it.

  • Of course,

  • they talk and communicate to each other and run as an app.

  • So now if a tester, for example,

  • or another developer logs in to the dev server,

  • they will be able to test the application.

  • So this is a simplified workflow,

  • how Docker will work in a real-life development

  • process. In the next few videos,

  • I'm going to show you a hands-on demo of how to actually

  • do all of this in practice.

  • So in this video,

  • we are going to look at some practical example

  • of how to use Docker in a local development process.

  • So what we are going to do is a simple demo

  • of a JavaScript application with Node.js

  • in the backend, to simulate a local development process.

  • And then we're going to connect it to a docker

  • container with a MongoDB database in it. So let's get started.

  • So in this video,

  • we're going to see how to work with docker containers

  • when developing applications.

  • So the first step will be: we're going to develop

  • a very simple UI and backend application using JavaScript,

  • a very simple HTML structure, and Node.js in the backend.

  • And in order to integrate all of this with a database,

  • we are going to use a docker container of a MongoDB

  • database, and also, to make working

  • with MongoDB much easier, so we don't have

  • to execute commands in the terminal,

  • we're going to deploy a docker container of a MongoDB UI,

  • which is called Mongo Express,

  • where we can see the database structure and all

  • the updates that our application is making

  • in the database.

  • So this development setup

  • should give you an idea of how

  • docker containers are actually

  • used in the development process.

  • So I've already

  • prepared some very

  • simple JavaScript application, so in order to see the code,

  • basically we have this index.html, which is very simple

  • code, and we have some JavaScript here, and we're

  • using Node.js in the backend that just serves that index.html file

  • and listens on port 3000.

  • So we have the server running here in the backend and we have

  • the UI that looks like this.

  • So basically it's just a user profile page with some user information, and the user can edit their profile. If I, for example, change the name here, and if I change the email address and make changes like this, I can save my profile and I have my updates here.

  • However, if I refresh the page, of course the changes will be lost, because it's just JavaScript and Node.js. So there's no persistence component in this application.

  • So in order to have this, which is actually how real-life applications work, you know that you need to integrate the application with a database. So using that example, I will try to show you how you can actually use Docker containers to make the development process easier, by just pulling one of the databases and attaching it or connecting it to the application.

  • So in this case, we're going to go with the MongoDB application, and in addition to the MongoDB container, we're going to also deploy a MongoDB UI, which is its own container. It's called Mongo Express, where we can manage or see the database contents and the updates from our application much easier.

  • So now let's see how that all works.

  • So in order to get started, let's go to Docker Hub and find our MongoDB image. Here, let's search for mongo, and we have MongoDB here, and the Mongo Express, which is another container that we're going to use for the UI.

  • So first, let's pull the MongoDB official image. I already have the MongoDB image locally, so pulling doesn't take long on my laptop, but for you it's probably going to need a couple of seconds.

  • The next one we're going to pull is the Mongo Express, which I also have, I believe. So let's see. Yes, it's also there.

  • So if I check locally, I have the mongo and mongo-express images.
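For reference, the commands just described are simply (image names as they appear on Docker Hub):

```sh
# pull both images from Docker Hub
docker pull mongo
docker pull mongo-express

# verify that both images are now available locally
docker images
```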

  • So the next step is to run both the mongo and mongo-express containers, in order to make the MongoDB database available for the application and also to connect the Mongo Express with the MongoDB container. So let's do the connection between those two first.

  • In order to do that, we have to understand another Docker concept: Docker network. So how it works is that Docker creates an isolated Docker network where the containers are running in.

  • So when I deploy two containers in the same Docker network, in this case mongo and mongo-express, they can talk to each other using just the container name, without localhost, port number, etc., just the container name, because they're in the same network.

  • And the applications that run outside of Docker, like our Node.js, which just runs from a node server, are going to connect to them from outside, or from the host, using localhost and the port number.

  • So later, when we package our application into its own Docker image, what we're going to have is again a Docker network with a MongoDB container and a Mongo Express container.

  • And we're going to have our Node.js application, which we wrote, including the index.html and the JavaScript frontend, in its own Docker container, and it's going to connect to the MongoDB.

  • And the browser, which is running on the host but outside the Docker network, is going to connect to our JavaScript application, again using the hostname and the port number.

  • So Docker by default already provides some networks. If we say docker network ls, we can already see these auto-generated Docker networks. So we have four of them with different names and drivers; we're not going to go into detail here.

  • But what we are going to do is create our own network for the MongoDB and the Mongo Express, and we're going to call it mongo-network. So let's do this right away.

  • I'm going to say docker network create, and we are going to call it mongo-network. So now if I do a docker network ls again, I see my Docker network has been created.
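The two commands from this step:

```sh
# create a dedicated network for the two containers
docker network create mongo-network

# list networks again; mongo-network should now appear
docker network ls
```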

  • So now, in order to make our MongoDB container and the Mongo Express container run in this mongo-network, we have to provide this network option when we run the container, in the docker run command.

  • So let's start with the mongo. We all know that docker run is the command to start a container from an image, right? So we have docker run mongo, which is the basic docker run command. However, in this case we want to specify a couple of things.

  • As you learned from the previous videos, you have to specify something called port. So we need to open a port of MongoDB. The default port of MongoDB is 27017, so we'll take that port actually for both host and container. So mongo will run on this port inside of the container, and we open the same port on the host. So that will take care of the port.

  • Then we will run it in detached mode. In addition to that, there are a couple of things that we can specify when starting up the container, and these are environment variables of the MongoDB.

  • Let's see. In the official image description, you actually have some documentation about how to use the image, which is very helpful to understand what kind of configuration you can apply to it.

  • Here you see some environment variables. So basically, on startup, you can define what the root username and the password will be, which is very handy, because we're going to need those two for the Mongo Express to connect to the mongo.

  • And you can also specify the init database. We're just going to provide the username and password, because we can create the database from the Mongo Express UI later. So let's do that.

  • And the way you can specify the environment variables, which you can actually see here as well, is by just... let's copy this one. So here you say environment variable, that's what the -e flag stands for, and for the username we'll say admin, and another variable, which is the password, will be just password.

  • So in this way, we can actually override what the default username and password will be.

  • So two more things that we need to configure in this command are the container name, because we're going to need that container name to connect with the Mongo Express, so we'll call this one mongodb, let's say. And another one we need is the network that we created, which was called mongo-network.

  • So let's make this command a little bit more structured on multiple lines, so it's more readable.

  • So basically, to go one more time through all these options or flags that we set: it's going to start in detached mode, we are opening the port on the host, the username and password that we want MongoDB to use in the startup process, we're going to overwrite the name of the container, and this container is going to run in the mongo-network. And this should actually start the container.
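Assembled, the command built up in this step looks roughly like this (admin/password are the demo credentials chosen here; the variable names are the documented ones from the mongo image on Docker Hub):

```sh
# start MongoDB: detached, port mapped, root credentials set,
# named "mongodb", attached to the mongo-network
docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --name mongodb \
  --net mongo-network \
  mongo
```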

  • OK, so if we want to see whether it was successful, we can check the container logs and see what's happening inside. So as we see, MongoDB was started and everything actually looks good: waiting for connections on port 27017.

  • OK, so now let's start Mongo Express. We want Mongo Express to connect to the running MongoDB container on startup.

  • And here we have an example of how to run it, and here we have a list of environment variables that we can configure. So let's quickly look at them.

  • Username, password: we don't need them. However, we need the admin username and password of the MongoDB; this is actually what we overrode with admin and password. So we are going to use them, because Mongo Express will need some username and password to authenticate with the MongoDB when connecting.

  • The port is by default the correct one, so we don't need to change that. And this is an important part: this is the MongoDB server, right? So basically, this is the container name that Mongo Express will use to connect to the MongoDB.

  • And because they're running in the same network, only because of that, this configuration will work. If I hadn't specified the network, then I could specify the correct container name here, but it wouldn't work.

  • So with that said, let's actually create the docker run command for Mongo Express as well.

  • So let's clear the history and let's start. So again, we run in detached mode, and let's see what parameters we need.

  • So first of all, port. Let's see what the default port is that Mongo Express runs on. That's 8081, so we'll take that. So basically, it's going to run on our laptop on port 8081.

  • The next options would be these two, and remember, environment variables need to be specified with the -e flag. This is the admin username of MongoDB, which is admin, because we specified it when we started the MongoDB container. And this is the password; let's set this one as well.

  • Don't forget the network: --net mongo-network. We have the name; we can call it mongo-express.

  • And let's see what else we might need here. Yes, this is an important one: the MongoDB server, and that is our container name. Let's actually see it again with docker ps. The one running is called mongodb, that's the container name, and this is what we need to specify here. So I'm going to write this here.

  • And finally, the image is called mongo-express. I'm just going to copy this one here. And that is it. So basically, with this command, Mongo Express should be able to connect to the MongoDB container. So let's run it.
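Assembled, this second command looks roughly like the following (the ME_CONFIG_* variable names come from the mongo-express image documentation on Docker Hub; admin/password are the demo values set above):

```sh
# start Mongo Express, pointed at the "mongodb" container by name,
# in the same mongo-network
docker run -d \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  --net mongo-network \
  --name mongo-express \
  mongo-express
```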

  • And just to make sure, let's check the container logs and see what's happening there. Waiting for MongoDB... Welcome to mongo-express. It looks like it connected successfully: it says here database connected, and the Mongo Express is available at port 8081.

  • So let's check the Mongo Express out at port 8081. So actually, let's close these tabs, we don't need them anymore. And here, if I say localhost:8081, I should be able to see the Mongo Express.

  • So these are the databases that already exist by default in mongo, or which are created on startup, and using the UI we can create our own database.

  • As we saw previously, we could have specified an environment variable on MongoDB startup, and that would have created a new database. However, it doesn't matter; we will just create a database here. So we will call it user-account database. So let's create one.

  • And now we can actually use it, or connect to this database, from Node.js. So let's see how that works.

  • So now we have the MongoDB container and the Mongo Express container running, so let's check that we have both of them.

  • Now we'll have to connect Node.js with the database. The way to do it is usually to give a protocol of the database and the URI, and the URI for the database would be localhost and the port that it's accessible at.

  • I already went ahead and prepared the code for Node.js.

  • So basically, we are going to use a MongoClient here, which is a node module, and using that MongoClient we are connecting to the MongoDB database.

  • So this is the protocol, the host and the port that we just saw that the MongoDB is listening at, and the username and password of the root user of MongoDB.

  • Of course, usually you wouldn't put the password here, or use an admin or root username and password to connect to a database, but this is just for demonstration purposes. And these are the username and password that we set as environment variables when we created the Docker mongo container. So let's check that.

  • So this is the MongoDB container command, and this is the root username and root password that we specified, and this is what we are going to use in the code. As I said, for demonstration purposes, I will write the password directly here. So then we connect to the database.

  • I also went ahead, and in the Mongo Express user-account database, I created a collection, which is like a table in the MySQL world, called users.

  • So here I connect to the user-account database and I query the users collection, and this is a GET request. So I'm just fetching something from the database.

  • And this is the update request: same thing, I connect to the database using the same URI and the database name, and I update or insert something in the collection. So let's see how all that works.
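A sketch of the two handlers described above, using the MongoDB Node.js driver (the `mongodb` npm package). The database name user-account matches the demo; the `userid` field and function names are illustrative assumptions, not the video's exact code:

```javascript
// Demo root credentials, set as env vars on the mongodb container.
// As noted above, hardcoding these is for demonstration only.
const MONGO_URI = 'mongodb://admin:password@localhost:27017';

// GET handler logic: fetch one user document from the users collection
async function getUser() {
  const { MongoClient } = require('mongodb'); // requires `npm install mongodb`
  const client = await MongoClient.connect(MONGO_URI);
  const user = await client
    .db('user-account')
    .collection('users')
    .findOne({ userid: 1 }); // hypothetical key field
  await client.close(); // end the connection, as seen later in the logs
  return user;
}

// Update handler logic: insert the profile, or update it if it exists
async function updateUser(profile) {
  const { MongoClient } = require('mongodb');
  const client = await MongoClient.connect(MONGO_URI);
  await client
    .db('user-account')
    .collection('users')
    .updateOne({ userid: 1 }, { $set: profile }, { upsert: true });
  await client.close();
}
```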

  • So let's head over to the UI. So in the users collection there is no data; it's empty. So we're going to refresh the page and edit the data. So I'm going to change some values here and update, and on refresh we see that a new insert was made.

  • So this is the update-profile section here. So all this was executed; it connected to the MongoDB. And now we have one entry, with the email and the name that we changed.

  • So if I refresh it now, it fetched the newly inserted user data and displayed it here in the UI.

  • And also, if you want to see what the MongoDB container actually logs during this process, we can look at the logs. So I'm going to say docker logs, using the container ID.

  • Let's say I wanted to see just the last part of it, because I want to see what the last activity was. I can also, let's clear this, do tail, so I can just display the last part of it. Or, if I wanted, I could also stream the logs. So I'll clear this again, and I will stream the logs, so I won't have to redo the logs command all the time.
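The three variants just mentioned:

```sh
# one-off snapshot of the container's logs
docker logs <container-id>

# only the last few lines
docker logs <container-id> | tail

# stream the logs continuously (follow mode)
docker logs <container-id> -f
```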

  • So if I make a line here, for example, to mark the last logs, I can refresh the page. Let's make some other changes; let's change this and save the profile. So I see some activity here as well.

  • So these connections are new, and it also says received client metadata, and this is where the Node.js request comes in, with the Node.js driver and its version. And at the end of each communication there is an end-connection entry, because we end the database connection at the end.

  • So we see that also in the logs. So if, for example, something wasn't working properly, you could always check it in the logs here.

  • So with that, we have a fully functional JavaScript application, which has persistence in the MongoDB database, and we also have a Mongo UI, both of them running in a Docker container.

  • So this should be a realistic example of how local development using Docker containers would look.

  • So in the last video, we created and started two Docker containers, MongoDB and Mongo Express, and these are the commands that we used to make it happen, right?

  • First we created a network where these two containers can talk to each other using just the container name, and no host, port, etc. is necessary for that. And then we actually ran two docker run commands, with all the options and environment variables, etc. set.

  • Now, this way of starting containers all the time is a little bit tedious, and you don't want to execute these run commands all the time on the command line terminal, especially if you have a bunch of Docker containers to run. You probably want to automate it, or just make it a little bit easier.

  • And there is a tool that makes running multiple Docker containers with all this configuration much easier than with docker run commands, and that is Docker Compose.

  • If you've already heard of Docker Compose and you were wondering why it is useful and what it actually does, then bear with me; in the next slide I'm going to explain that.

  • So this is the docker run command of the MongoDB that we executed previously. Basically, with a Docker Compose file, what we can do is take the whole command with its configuration and map it into a file, so that we have a structured command.

  • So if you have, let's say, 10 containers that you want to run for your application, and they all need to talk to each other and interact with each other, you can basically write all the run commands for each container in a structured way in the Docker Compose file, and we'll see how that structure actually looks.

  • So on the right side, in the Docker Compose example, the first two lines are always there, right? Version 3, that's the latest version of Docker Compose, and then we have the services; this is where the container list goes.

  • So the first one is mongodb, and that actually maps to the container name, right? This is going to be a part of the container name when Docker creates a container out of this configuration blueprint.

  • The next one is the image, right? We need to know which image that container is going to be built from, and of course you can specify a version tag here, next to the name.

  • The next one is ports, so we can also specify which ports are going to be open. The first one is on the host, and the second one, after the colon, is on the container. So the port mapping is there.

  • And of course, the environment variables can also be mapped in the Docker Compose file. And this is how the structure of Docker Compose looks for one specific command.

  • Let's actually see the second container command, for Mongo Express, that we executed, and how to map that. So now again, we have a docker run command for Mongo Express, and let's see how we can turn it into Docker Compose.

  • So, as I said, services will list the containers that we want to create, and again, the name mongo-express will map to the container name. The next one will be the image; again, you can add a tag here if you want to have a specific one. Then you have the ports, 8080 to 8081, and then you have all the environment variables again, under environment.

  • And this is how the Docker Compose file will look. So basically, Docker Compose is just a structured way to contain very normal docker run commands. And of course, it's going to be easier for you to edit the file if you want to change some variables, or if you want to change the ports, or if you want to add some new options to the run command, so to say.

  • And maybe you already noticed that the network configuration is not there in the Docker Compose file. So this mongo-network that we created: we don't have to do that in Docker Compose. If we go to the next slide, we have the same concept here; we have containers that will talk to each other using just the container name.

  • So what Docker Compose will do is actually take care of creating a common network for these containers, so we don't have to create the network and specify which network these containers will run in. And we're going to see that in action right away.

  • So let's actually create a Docker Compose file. I'm going to paste all my contents here, and this is exactly what we saw on the slides, and I'm going to save it as mongo.yaml, and we see the highlighting as well.

  • Be very aware of the indentation; it has to be correct. So this is the list of all the containers, on the same level, and then each container has its configuration inside it.

  • So now, compared to docker run commands, it will be very easy for me to go here and change these environment variables, or add some new configuration options, et cetera.
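Assembled from the two docker run commands mapped above, the compose file described here would look roughly like this (no network section, since Compose creates one automatically; the 8080-to-8081 host mapping for Mongo Express is noted later in this section):

```yaml
# mongo.yaml - both containers from the previous docker run commands
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
```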

  • So here again, for demonstration, we actually saved the Docker Compose file in the code, so it's part of the application code.

  • So now that we have a Docker Compose file, the question is: how do I use it, or how do I start the containers using it? So let's go to the command line and start the Docker containers using this Docker Compose file.

  • The way to use it is with the docker-compose command. Now, if you've installed Docker on your laptop, it usually gets installed with the Docker Compose package inside, so you should have both the docker and docker-compose commands installed.

  • So the docker-compose command takes an argument, which is the file. I'm going to specify which file I want to execute, and in my case it's called mongo.yaml. And at the end, I want to say what I want to do with this file. In this case, the command is up, which will start all the containers which are in the mongo.yaml.
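So the command shape is (assuming the file is named mongo.yaml as above):

```sh
# start all containers defined in the compose file
docker-compose -f mongo.yaml up
```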

  • So let's actually check before that, that there are no containers running. So I don't have anything running here, and I'm going to start those two containers.

  • OK, so there are a couple of interesting things here in this output, so let's scroll all the way up.

  • So we've talked about Docker network, and how we created our own network at the beginning to run the containers inside, and I said that Docker Compose takes care of it. And here we see the output where it actually created a network called myapp_default. This is the name of the network, and it's going to run those two containers in it.

  • These are actually the names of the containers that Compose created; this is what we specified, and it just added a prefix and a suffix to it. And it created those two containers in that network. So if I actually go here and do docker network ls, I see the myapp_default is here. So that's one important thing.

  • And the other one is that the logs of those containers are actually mixed, because we are starting both at the same time. As you see, the Mongo Express has to wait for MongoDB to start, because it needs to establish a connection.

  • So we see here in the logs, somewhere, MongoDB starting; we still get connection refused because it's not started completely; and somewhere here, when MongoDB is started and listening for connections, Mongo Express is able to connect to it.

  • So this is something that you can also do with Docker Compose: when you have two containers where one depends on another one starting, you can actually configure this waiting logic in the compose file.

  • So now, let's actually see that the Docker containers are running. So we have both of them here; you see the container names that Docker Compose gave them.

  • And one thing to note here is that the Mongo Express actually started on port 8081 inside the container; we can see that here. So we are opening a port 8080 on my laptop that actually forwards the requests to the container at port 8081, just so that you don't get confused, because it was different on the slides.

  • So now that we have restarted the containers, let's actually check the first one, which is Mongo Express. So it's running on 8080. In the previous example, we created a database and a collection, which are gone, because we restarted the container.

  • This is actually another very important concept of containers to understand: when you restart the container, everything that you configured in that container's application is gone. So data is lost; so to say, there is no data persistence in the containers themselves.

  • Of course, that is very inconvenient. You want to have some persistence, especially when you're working with a database. And there is a concept, which you're going to learn later, called volumes, that makes it possible to have persistence between container restarts.

  • So let's actually create the database again, because we need it, and inside the database we had a users collection; let's create that one as well, and that is empty.

  • Now let's actually start our application. And there you go. So now, if I modify this one here and update, I should see the updated entry here. So the connectivity with MongoDB works.

  • So now, what do I do if I want to stop those containers? Of course, I could go there and say docker stop and provide all the IDs, as we did previously. But with Docker Compose it's actually easier: I can do docker-compose again, specify the file, and instead of up I'm going to say down, and that will go through all the containers and stop them all.

  • And in addition to stopping and removing the containers, it also removes the network, so the next time we restart it, it's going to be recreated. So let's actually check that: docker network ls, and the myapp_default network is gone.
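So the teardown is simply:

```sh
# stop and remove the containers, and remove the auto-created network
docker-compose -f mongo.yaml down

# the auto-created network no longer appears
docker network ls
```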

  • And when I do up again, you see it gets recreated. So that should give you a good idea of what Docker Compose is and how to use it. Next, we're going to build our own Docker image from our Node.js JavaScript application.

  • So now let's consider the following scenario: you have developed an application feature, you have tested it, and now you're ready to deploy it, right? To be deployed, your application should be packaged into its own Docker container.

  • So this means that we are going to build a Docker image from our JavaScript Node.js backend application and prepare it to be deployed on some environment. Let's review this diagram that we saw at the beginning of the tutorial.

  • So we have developed a JavaScript application, we have used the MongoDB Docker container for it, and now it's time to commit it to Git. In this case, we're going to simulate these steps on the local environment, but still, I'm going to show you how these steps actually work.

  • So after the commit, a continuous integration runs. So the question is: what does Jenkins actually do with this application? It builds the application, so that JavaScript application, using npm build, etc., it packages it then in a Docker image, and then pushes it into a repository.

  • So we're going to actually simulate what Jenkins does with the application and how it actually packages it into a Docker image, on the local environment. So I'm going to do all this on my laptop, but it's basically the same thing that Jenkins will do. And then, in a later step, we can actually push the built image into a Docker repository.

  • In order to build a Docker image from an application, we basically have to copy the contents of that application into the Docker image. It could be an artifact that we built; in our case, we just have three files, so we can copy them directly into the image, and we're going to configure it.

  • In order to do that, we're going to use a blueprint for building images, which is called a Dockerfile. So let's actually see what a Dockerfile is and how it looks. So as I mentioned, a Dockerfile is a blueprint for creating Docker images, and the syntax of a Dockerfile is super simple.

  • So the first line of every Dockerfile is FROM image. So whatever image you are building, you always want to base it on another image.

  • In our case, we have a JavaScript application with a Node.js backend, so we are going to need node inside of our container so that it can run our node application, instead of basing it on a Linux Alpine or some other lower-level image, because then we would have to install node ourselves on it. So we are taking a ready node image.

  • And in order to see that, let's actually go to Docker Hub and search node here, and here you see there is a ready node image that we can base our own image on. So here we have a lot of different tags, so we can actually use one specific one, or we can just go with the latest if we don't specify any tag.

  • So what basing our own image on a node image actually means is that we are going to have node installed inside of our image. So when we start a container, and we actually get a terminal of the container, we can see that the node command is available, because node is installed there. This is what FROM node actually gives us.

  • The next one is that we can configure environment variables inside the Dockerfile. Now, as you know, we have already done this using the docker run commands or the Docker Compose, so this would just be an alternative to defining environment variables in a Docker Compose file, for example.

  • I would say it's better to define the environment variables externally, in a Docker Compose file, because if something changes, you can actually override it: you can change the Docker Compose file and recreate the container, instead of rebuilding the image. But this is an option. So this ENV command basically translates to setting the environment variables inside of the image environment.

  • The next one is RUN. So all these capital-case words that you see, FROM, ENV and RUN, are basically part of the syntax of a Dockerfile.

  • So using RUN, you can basically execute any kind of Linux command. So you see, mkdir is a Linux command that creates a /home/app directory. Very important to note here: this directory is going to live inside of the container. So when I start a container from this image, the /home/app directory will be created inside of the container, and not on my laptop, not on the host.

  • So all these commands that you have in a Dockerfile will apply to the container environment; none of them will be affecting my host environment, my laptop environment. So with RUN, you can basically execute any Linux commands that you want, so that's probably one of the most used ones.

  • And we also have a COPY command. Now, you would probably ask: can't I execute a copy command, a Linux cp command, using RUN? Yes, you could. But the difference here is that, as I said, all these commands like ENV and RUN apply to, or get executed inside of, the container. The COPY command that you see here actually executes on the host.

  • And you see, the first parameter is the source and the second parameter is the destination, so source and target. So I can copy files that I have on my host into that container image. Because if I were to execute RUN cp source destination, that command would execute inside of the Docker container, but I have the files that I want to copy on my host.

  • So from an seemed or command is always part of the aircraft.

  • What CMD does is basically execute an entry

  • point Linux command.

  • So this line with the command actually

  • translates to node server.js.

  • So remember, here we actually do node server.js,

  • so we start a Node server with node server.js.

  • This is exactly what it does, but inside of the container.

  • So once we copy our server.js and other files

  • inside of a container,

  • we can then execute node server.js and we are able to do

  • it because we are basing on the node image

  • that already has node preinstalled and we are going

  • to see all of this in action. So another question here.

  • What is the difference between RUN and CMD?

  • Because I could also say RUN node server.js.

  • The difference, again, is that CMD is an entrypoint

  • command, so you can have multiple RUN

  • commands with Linux commands, but CMD just one.

  • And that marks for the Dockerfile that

  • this is the command that you want to execute

  • as an entry point.

  • So that basically runs the server and that's it.
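  • Putting the directives above together, the Dockerfile being described looks roughly like this (a sketch reconstructed from the narration; values come from this demo, and the CMD path into /home/app reflects the fix arrived at later in this walkthrough):

```dockerfile
FROM node:13-alpine

# Baked-in env vars (defining them in docker-compose is usually preferable)
ENV MONGO_DB_USERNAME=admin \
    MONGO_DB_PWD=password

# RUN executes inside the container: create /home/app in the image
RUN mkdir -p /home/app

# COPY executes on the host: copy sources from the build context into the image
COPY . /home/app

# CMD is the entrypoint command: start the Node server
CMD ["node", "/home/app/server.js"]
```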

  • So now let's actually

  • create the Docker file,

  • and just like the Docker Compose file,

  • the Dockerfile is part of the application code.

  • So I'm going to create a new file here and I'm going to.

  • Paste here, the contents. So, again,

  • we are basing off the node image, and actually, instead

  • of just having the latest node,

  • I'm going to specify a node version.

  • So I'm going to take 13-alpine.

  • So all these that you see here are tags, so I can

  • use any of them as a tag.

  • So I'm going to say 13-alpine.

  • Like this,

  • so this is going to be a specific node image that I'm

  • going to use as my base image,

  • let's actually stop here for a moment and take a little

  • bit of a deep dive on this line.

  • So since we saw that the Dockerfile is a blueprint

  • for any Docker image,

  • that should actually mean that every Docker image

  • that there is on Docker Hub should be

  • built from its own Dockerfile. Right.

  • So if we actually go to let's actually look at

  • one of the latest versions, which is 13-alpine.

  • Let's click inside. And as you see,

  • this specific image has its own Dockerfile.

  • And here, as you see,

  • we have the same from that we just saw.

  • And this is what this node official image is based on,

  • which is a base image, alpine:3.10. Right.

  • And then we have this environment variable

  • set and all these Linux commands using RUN and some

  • other environment variables. And you have this entrypoint,

  • which is a script.

  • So you can also execute the whole Shell script

  • instead of separate commands. And you have this final CMD command.

  • Right. So you don't have to understand any of this.

  • I just want to demonstrate that every image

  • is based off another base image. Right.

  • So in order to actually visually comprehend how this

  • layer stacking works with images,

  • let's consider this simplified visualization.

  • So our own image that we're building up with the version

  • 1.0 is going to be based on a node image

  • with a specific version.

  • That's why we're going to specify FROM node:13-alpine.

  • And the node:13-alpine image,

  • as we saw in its Dockerfile,

  • is based on an Alpine base image with version 3.10.

  • That's why it specifies FROM alpine:3.10.

  • So Alpine is a lightweight base image,

  • then we install node on top of it,

  • and then we install our own application on top of that.

  • And basically this is how all the images are built.

  • So now let's go back and complete our Dockerfile.

  • So we have the from specified.

  • We have the environment variables specified.

  • And in just a second,

  • we're going to actually see these commands in action.

  • So let's copy that. And this is also very important.

  • The Dockerfile has to be called exactly like that.

  • You can't just give it any name. It is always called Dockerfile,

  • starting with a capital D, and that's it.

  • It's a simple text file, so just save it like this.

  • And here you can see the highlighting

  • and this docker icon.

  • So now that we have

  • a Dockerfile ready,

  • let's see how to actually use it.

  • So how do we build an image out of it?

  • So in order to build an image using a Dockerfile,

  • we have to provide two parameters.

  • One is we want to give our image a name and a tag,

  • just like all the other images have.

  • So we are going to do it using -t, so we are going

  • to call our image my-app and we're going to give

  • it a tag of 1.0.

  • The tag could be anything; you could even

  • call it version-one. It wouldn't matter.

  • So we are going to do 1.0.

  • And the second required parameter

  • is actually the location of the Dockerfile,

  • because we want to tell Docker here,

  • build an image using this Docker file.

  • And in this case,

  • because we're in the same folder as the Dockerfile,

  • we're just going to say current directory.
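  • The build command described here, as a sketch (image name and tag taken from this demo; it needs a running Docker daemon):

```shell
# -t names and tags the image; "." points at the directory
# containing the Dockerfile (the build context)
docker build -t my-app:1.0 .
```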

  • When we execute this,

  • we're going to see that image is built.

  • And this is the ID of the image that was built.

  • Because I already have

  • node:13-alpine on my laptop,

  • this just used the one I have lying

  • around locally. For you, if it's the first time,

  • you will actually see that it's pulling node

  • image from the Docker Hub. So now, with docker images,

  • I can actually see that my image is here,

  • it says created two days ago. I don't know why. But anyways,

  • so I have the image ID, which is this one here,

  • and I have the name of the image and the tag of the image.

  • So if we go back to this diagram that we saw in the overview,

  • so basically we've gone all these steps or we have

  • simulated some of the steps.

  • We've developed the JavaScript application

  • using Docker containers.

  • And once the application is ready,

  • let's say we made the commit, and we just simulated

  • what a Jenkins server also does.

  • So what Jenkins does is actually it takes the docker

  • file that we created.

  • So we have to commit the Dockerfile

  • into the repository with the code.

  • And Jenkins will then build a Docker image

  • based on the Dockerfile.

  • And what is an important point here is that usually

  • you don't develop alone, you are in a team,

  • so other people might want to have access to that

  • image of your application that you developed.

  • it could be a tester maybe who wants to pull

  • that image and tested locally,

  • or you want that image to be deployed

  • on a development server. Right. In order to do that,

  • you have to actually share the image so it is pushed

  • into a docker repository. And from there,

  • either people can take it, for example, a tester,

  • maybe want to download the image from there and test it locally,

  • or a development server can actually pull it from there.

  • So let's actually

  • just run a container.

  • I'm just going to say run the image name obviously,

  • and tag, like this. And in this case,

  • I'm not going to specify any other options

  • because we just want to see what's going on inside

  • of the container. So I'm just going to run it. OK,

  • So the problem is that it can't find the server.js file,

  • which is actually logical because we're not telling

  • it to look in the correct directory.

  • So since we're copying all the resources in this directory,

  • server.js is going to be there as well.

  • And this is another topic.

  • Whenever you adjust a Dockerfile,

  • you have to rebuild an image because the old

  • image cannot be overwritten, so to say.

  • So what I'm going to do now is actually I'm going to delete

  • the one that I built.

  • So I'm going to actually take the image ID.

  • This is how you delete an image.

  • But I can delete

  • it because as it says,

  • the image is used by a stopped container.

  • So if I do docker ps -a... actually, let's

  • grep for my-app, like this,

  • I have to first delete the container.

  • So this is how you delete a container.

  • It's docker rm. And now that I've deleted the container,

  • I can delete the image.

  • So the image deletion is docker rmi, like this.

  • So if I do images now I see my image isn't there. OK,

  • so we've modified the Docker file,

  • so let's rebuild it now. So: docker build.
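  • The delete-and-rebuild cycle just walked through, as one sketch (the IDs are placeholders for whatever docker ps and docker images print on your machine):

```shell
docker ps -a | grep my-app    # find the stopped container that uses the image
docker rm <container-id>     # remove the container first
docker rmi <image-id>        # then remove the image
docker build -t my-app:1.0 . # rebuild from the adjusted Dockerfile
```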

  • OK.

  • And let's see the images here. So let's start it again,

  • so it's my-app:1.0

  • and let's run

  • it and see the problem is fixed:

  • app listening on port 3000.

  • So our app is running. So this one here, my-app:1.0,

  • first of all,

  • we can see the logs here like this.

  • We see that the app is listening on port 3000.

  • We know everything is cool. Now, to actually

  • get a little bit more insight, let's enter the container.

  • Let's get the terminal,

  • the command line terminal of the container

  • and look around there.

  • So I'm going to say docker exec -it (interactive terminal),

  • I'm going to specify the container ID and /bin/bash,

  • like this.

  • And since /bin/bash doesn't work,

  • we can actually try the shell, /bin/sh.

  • So this is something you will also encounter

  • because some containers do not have bash installed.

  • So you have to connect using /bin/sh. So one of them

  • has to work, always.
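  • The two variants just mentioned, as commands (container ID is a placeholder):

```shell
docker exec -it <container-id> /bin/bash  # fails if bash isn't installed in the image
docker exec -it <container-id> /bin/sh    # the fallback; one of the two always works
```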

  • So let's see which directory we are in.

  • So we are in the root directory and we see our

  • virtual file system there. And as you see,

  • the cursor changes as well.

  • So that means we're inside of a container.

  • So now let's actually check some of this stuff.

  • So first of all,

  • we specified some environment variables

  • here in the Docker file.

  • And this means that these environment variables

  • have to be set inside the Docker environment.

  • So if we do env, we actually see that MONGO_DB_USERNAME

  • and MONGO_DB_PWD are set,

  • and there are some other environment

  • variables automatically set; we don't care about those.

  • So another thing we can check is this directory

  • because remember, with this line,

  • we actually created the /home/app directory.

  • So let's see: ls /home/app.

  • And as you can see,

  • the directory was created and with the next line,

  • we copied everything in the current folder.

  • So if we actually go and see it in the Finder,

  • this is where the Dockerfile resides.

  • So it basically copied everything that is inside

  • of this directory,

  • so all of this into the container. Now,

  • we don't actually need to have the Dockerfile and Docker

  • Compose files and this other stuff in here,

  • because the only thing we need are the JavaScript

  • files or if we build the JavaScript

  • application artifact, just the artifact.

  • So let's go ahead and improve that.

  • So what I'm going to do is I'm going to create an app

  • directory and I'm going to copy just the files

  • that I'm going to need for starting an application

  • inside of a container. So I'm going to take those.

  • And the images as well,

  • so all these are just external ones,

  • we don't need them there. And now the images,

  • the index.html file, package.json, server.js and node_modules

  • are inside of it.

  • So what we can do now is, instead of copying

  • the whole directory with the Dockerfiles,

  • I just want to copy all the contents of the app folder.

  • So I'm going to do is I'm going to say copy

  • all the contents. And again,

  • because we modified the Dockerfile,

  • we need to recreate the image. In order to leave

  • the Docker container terminal, you can actually type exit.

  • So now we are on the host again. So if I do docker images again,

  • I have to first delete the container and then image.

  • But in order to delete the container,

  • I have to first stop it.

  • So now I can remove the container and now I can

  • actually remove the image that the container was based on.

  • And let's check again.

  • So let's actually execute that build

  • command again.

  • So now that we have

  • the image built, let's actually run it.

  • So I'm going to say my-app:1.0, and of course,

  • I could have executed it with -d, in detached mode.

  • It doesn't matter now. And if I do docker ps,

  • I see my container running. Now let's

  • actually enter the container again, with docker exec -it.

  • And as we learned, it has to be /bin/sh. And again,

  • we're going to see the /home/app, and here we just have

  • the contents of the app directory,

  • so no unnecessary Dockerfile, docker-compose, etc. files,

  • which is actually how it's supposed to be.

  • Or as I said,

  • because I just had a couple of files here,

  • I copied all of them.

  • But usually, if you have this huge application,

  • you would want to compress them and package them

  • into an artifact and then copy that artifact

  • into the Docker image. OK, but as I said,

  • this was just for demonstration purposes,

  • because I just wanted to show you how you can

  • actually start it as a container and look inside.

  • And in this case, we improved a couple of things.

  • But usually we would start this container

  • from a Docker Compose as well,

  • together with all the other docker images

  • that the application uses.

  • And it also doesn't have any ports open.

  • So this is just for demonstration purposes.

  • So in this video,

  • we're going to create a private repository

  • for Docker images on AWS ECR. There are many more

  • options for Docker registries, among them Nexus and DigitalOcean,

  • among them, Nix's and Digital Ocean,

  • so we can see how to create a registry there,

  • build and tag an image so that we can push

  • them into that repository.

  • And in order to push the images into a private repository,

  • you first have to log in to that repository.

  • So let's see how it all works.

  • So the first step

  • is to actually create a private

  • repository for Docker.

  • It's also called a Docker registry. In this case,

  • we're going to do it on AWS. So let's see.

  • So I already have

  • an account on AWS,

  • so the service that we're going to use is called Elastic

  • Container Registry.

  • So ECR: Elastic Container Registry.

  • And because I don't have a repository there yet,

  • I am presented with this screen.

  • So in order to create a repository,

  • Click on Get Started.

  • And here we have a repository name and we're

  • actually going to name it the name of the application

  • that we have.

  • So I'm actually going to name it my-app.

  • This is the domain of the registry from AWS,

  • And this is the repository name,

  • which is the same as my image name.

  • And don't worry about the other stuff right now

  • and just create a repository. It's as simple as that. Now,

  • one thing I think specific to Amazon Container Service

  • is that here you create a docker repository per image.

  • So you don't have one repository where you can actually

  • push multiple images of different applications,

  • but rather for each image you have its own

  • repository. And if you go inside of the repository here,

  • It's empty now,

  • but what you store in a repository

  • are the different tags or different

  • versions of the same image.

  • So this is how the Amazon Container Service

  • actually works.

  • There are other Docker registries

  • that work differently. For example,

  • where you create a repository and you can just throw all

  • of your container images inside of that one repository.

  • So I think this is more or less specific to AWS.

  • So anyways,

  • we have a repository which is called my-app,

  • and let's actually see how we can push the image

  • that we have locally.

  • So let's actually check that once more.

  • So we want to push this image here into that repository.

  • So how do we do that? If you click on this one,

  • the view push commands will be shown.

  • This is different for each registry.

  • But basically what you need to do in order

  • to push an image into a repository

  • are two things. One,

  • you have to log in to the private

  • repository because you have to authenticate yourself.

  • So if you're pushing from your local laptop

  • or local environment,

  • you have to tell that private repository, hey,

  • I have access to it. These are my credentials.

  • If the Docker image is built and pushed from a Jenkins server,

  • then you have to give Jenkins credentials to log in

  • into the repository.

  • So docker login is always the first step

  • that you need to do. So here,

  • AWS actually provides a docker login command for ECR,

  • so it doesn't say docker login,

  • but in the background it uses one.

  • So I'm going

  • to execute the login command

  • for the AWS Docker repository,

  • so in the background it actually uses docker

  • login to authenticate.

  • So in order to be able to execute that,

  • you need to have the AWS command line interface

  • and the credentials configured for it.

  • So if you don't,

  • I'm going to put a link to the guide of how to do

  • that in the description.

  • I have configured both of them so I can execute this command

  • and I should be logged in successfully

  • to the Docker repository.
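  • For reference, with the current AWS CLI v2 the ECR login looks like this (region and account are placeholders; the older CLI used an `aws ecr get-login` command instead, which is likely what the recording shows):

```shell
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
```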

  • So now I have authenticated myself

  • to the Docker repository here.

  • So I'm able to push the image that I have

  • locally to that repository. But before I do that,

  • there is one step I need to do. So I've already built my image,

  • so that's fine. And now I have to tag my image.

  • And if this command here

  • looks a little bit too complicated

  • for you or too strange,

  • let's actually go and look at image naming

  • concepts in Docker registries.

  • So this is the naming

  • in Docker registries. This is how it works:

  • the first part of the image name,

  • the image full name is the registry domain.

  • So there is the host port, et cetera,

  • slash repository or image name and the tag. Now,

  • you may be wondering,

  • every time we were pulling an image out of Docker Hub,

  • we actually never had this complex

  • long name of the image. Right.

  • So when we were pulling an image,

  • it looked like this: docker pull mongo:4.2.

  • The thing is, with Docker Hub,

  • we're actually able to pull an image

  • with a shorthand without having to specify a registry domain.

  • But this command here is actually a shorthand

  • for this command.

  • What actually gets executed in the background

  • when we say docker pull mongo is docker pull

  • plus the registry domain. So docker.io/library

  • is the registry domain.

  • Then you have the image name and then you have the tag.

  • So because we were working with Docker Hub,

  • we were able to use a shortcut,

  • so to say. In private registries,

  • we can't just skip that part because there's no

  • default configuration for it. So in our case, in AWS

  • ECR, what we're going to do

  • is execute docker

  • pull with the full registry domain of the repository,

  • which is what we're going to see here, and a tag.

  • And this is how AWS generates the Docker

  • registry name.

  • That's why we see this long image name with the tag here.
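  • The naming scheme just described, as a sketch (the registry host below is a made-up placeholder, not a real AWS account):

```shell
# Full image name = registry domain / repository (image) name : tag
registry="123456789.dkr.ecr.eu-central-1.amazonaws.com"  # placeholder ECR domain
repository="my-app"
tag="1.0"

full_name="$registry/$repository:$tag"
echo "$full_name"

# On Docker Hub the registry part has a default, which is why
# "docker pull mongo" works: it expands to docker.io/library/mongo.
```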

  • And we have to tag our image like this.

  • So let's go back and take a look at our images,

  • our image that we built. And under the repository column,

  • it says my-app. Now,

  • the problem is we can't just push an image with this name,

  • because when we say docker push my-app like this,

  • Docker wouldn't know to which repository we're

  • trying to push it. By default,

  • it will actually assume we're trying to push to Docker Hub,

  • but it's not going to work, obviously,

  • because we want to push it to AWS ECR.

  • So in order to tell Docker, you know what,

  • I want this image to be pushed to a repository

  • with the name my-app, we have to tag the image.

  • So we have to include that information in the name

  • of the image.

  • And that is why we have to tag the image. Tagging

  • basically means that we are renaming our image

  • to include the repository domain, or the address,

  • in the name. OK, and AWS already gives us

  • the command that we can execute.

  • We want to use the specific version,

  • so I'm going to use 1.0 in both.

  • So what this is going to do is it's going to rename

  • (this is what docker tag does) my-app:1.0,

  • which is what we have locally,

  • to this one here.

  • So let's execute that and let's see what the outcome is.

  • And as you see,

  • it took the image that we had, made a copy,

  • and renamed it into this one.

  • So these two are identical images.

  • They're just called in a different way.

  • And now when we go back, we see the docker push command.

  • So basically,

  • this thing here is the same as docker push plus the name

  • of the image and the tag.

  • So this push command will tell Docker,

  • you know what, I want to take the image

  • with tag 1.0 and push it into a repository

  • at this address. So when I execute this command,

  • you see the push command will actually push the layers

  • of the docker image one by one.

  • This is the same thing as when we are pulling it.

  • We also pulled the images layer by layer

  • and this is what happens in the reverse

  • direction when we push it.

  • So this is also going to take a little bit.
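  • The whole tag-and-push sequence, sketched with a placeholder registry domain (it requires a prior docker login and a running Docker daemon):

```shell
# rename (tag) the local image to include the registry address
docker tag my-app:1.0 <registry-domain>/my-app:1.0

# push the layers to the private repository, one by one
docker push <registry-domain>/my-app:1.0
```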

  • So the push command was completed, and we should be able to see

  • that image in the repository now.

  • So if I go inside, see,

  • I have an image tag with 1.0.

  • This is our tag here, and a pushed-at time, the digest,

  • which is the unique hash of that image, and the image

  • URI, which is, again,

  • the name of the image using the repository address,

  • the image name (or repository name in this case) and the tag.

  • So now let's say I made some

  • changes in the Dockerfile, you know,

  • let's say I renamed this /home/app directory to something else.

  • Like this,

  • Or, what could also lead to the need to recreate

  • an image is obviously when I change

  • something in the code. Right. So, you know,

  • let's say I were to delete this line because I don't want

  • this console.log in my code anymore.

  • And now I have a different

  • version

  • of the application where I have

  • changes in the application.

  • So now I want to have those changes in the new Docker image.

  • So now let's build a new Docker image out of it.

  • So docker build; let's call it my-app

  • with a version 1.1, and the path to the Dockerfile.

  • And I have a second image,

  • which is called my-app with version 1.1.

  • So now, again,

  • because I want to push this to a repository,

  • I have to rename it to include the repository

  • address inside of it. So I'm going to do docker tag.

  • The first parameter is the image that I want to rename,

  • and the second one is the name of that image, a new name.

  • So it's going to be the same as the previous one

  • because the repository name and the address is the same.

  • Remember, we have one repository for one image,

  • but for different versions.

  • So we're building a version 1.1,

  • so it should end up in the same repository.

  • So now here we have 1.1.

  • And if I tag that and do docker images, I have a second image here.

  • So I'm going to copy that and I'm going to do docker

  • build...

  • and do not forget the tag,

  • it's important because it's part of the complete name. Sorry,

  • it's docker push.

  • And now some

  • of the layers that I already

  • pushed are there; only the ones

  • that changed are being pushed,

  • so to say. And also note that I just have to do docker

  • login once at the beginning, and then I can

  • pull and push images from this repository as many

  • times as I want. So docker login is done once.

  • So now that is complete. Let's actually reload this.

  • So my repository now has two versions.

  • So this is pretty practical. If you are, for example,

  • testing with different versions and you want to have

  • a history of those image tags, if you want to, for example,

  • test a previous version. And I think in AWS

  • the repository has a capacity

  • of holding up to 1000 image versions. So, for example,

  • my app here,

  • you can have a thousand different tags

  • of the same image. OK, so now again,

  • to compare it to the initial diagram that we saw

  • for this complete flow,

  • let's actually switch back to it quickly.

  • So here what we did is basically simulate how

  • Jenkins would push an image to a Docker repository.

  • So whatever we did on our laptop will be

  • the same commands, executed

  • on the Jenkins server. And again,

  • Jenkins user or Jenkins server user has to have

  • credentials to the Docker repository to execute

  • Docker login. Depending on the registry

  • and repository configuration, it will look different.

  • And Jenkins will then tag the image and push

  • it to the repository. And this is how it's done.

  • And the next step, of course,

  • we need to use that image that is lying now

  • in the repository, and we're going to see how

  • it's pulled from that repository. And again,

  • we're going to do it on the local environment.

  • But it's the same thing

  • that a development server or any other

  • environment would actually execute.

  • So in this video,

  • we're going to see how to deploy an application

  • that we built into a Docker image,

  • so after you package your application

  • in a docker image and save it in the private repository,

  • you need to somehow deploy it on a development

  • server or integration server or whatever other environment.

  • And we're going to use Docker Compose to deploy

  • that application.

  • So let's imagine we have logged into a development server

  • and we want to run our image that we just

  • pushed to the repository.

  • So, our my-app image and the MongoDB images,

  • both the database and the Mongo Express,

  • on the development server. The my-app image will be

  • pulled from the private repository, and the two Mongo

  • containers will be pulled from the Docker Hub.

  • So let's see actually how that would work.

  • So usually again,

  • you have developed your application,

  • you're done with it and you have created your own docker image.

  • Right now,

  • in order to start an application on development server,

  • you would need all the containers that make up

  • that application environment.

  • So we have MongoDB and Mongo Express in there already.

  • So what we are going to do is here we are going to add

  • a new container in the list,

  • which is going to be our own image.

  • So let's go ahead and copy the image from our repository.

  • So let's actually use the 1.0.

  • So, again,

  • remember we said that this image name

  • is a shortcut for docker.io

  • /library/mongo with a specific version.

  • So instead of that,

  • because we are pulling these images from Docker Hub,

  • we can actually skip that repository domain

  • in front of the images. But here,

  • we are pulling it from a private repository.

  • So if we were to specify our image like this,

  • Docker will think that our image resides on Docker Hub

  • and will try to pull it from Docker Hub.

  • And of course,

  • it won't find it because we have to tell Docker,

  • go and look at this registry, with this repository

  • name and this tag. And of course,

  • in order to be able to pull this image, or for Docker

  • Compose to be able to pull this image,

  • the environment

  • where you are executing this Docker Compose

  • file has to be logged in to the Docker repository.

  • So here, the development server has to pull the image

  • from the repository.

  • What we would need to do on the development server

  • is actually do a docker login before we execute the Docker Compose.

  • And obviously,

  • you don't need a docker login for Docker Hub.

  • Those Mongo images will be pulled freely. OK,

  • so the next thing that we have to configure

  • are the ports, because we obviously want to open the ports.

  • If we go back,

  • we see that our application runs on port 3000,

  • so the port of the container, or what the container

  • is listening on, is 3000.

  • And here we can open the port on the host machine.

  • So it's going to be 3000 to 3000.

  • We actually have the environment variables

  • inside of the Dockerfile,

  • but obviously we could have configured them in the Docker

  • Compose just like this. So it's an alternative.
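  • Putting the pieces together, the compose file being described would look roughly like this (a sketch: the registry domain is a placeholder, and the Mongo/Mongo Express environment settings are trimmed to the usual ones for those official images):

```yaml
version: '3'
services:
  my-app:
    image: <registry-domain>/my-app:1.0   # pulled from the private repo; needs docker login
    ports:
      - 3000:3000
  mongodb:
    image: mongo            # shorthand; pulled from Docker Hub
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin  # demo values, not real credentials
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express    # also pulled from Docker Hub
    ports:
      - 8080:8080
```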

  • So this will be a complete docker compose file

  • that will be used on a development server

  • to deploy all the applications inside. So,

  • again, if we're trying to simulate a development

  • server, the first step will be to do the docker login.

  • In this case,

  • you have this AWS command for logging in to the repository,

  • which I have done already in this terminal.

  • So the next step is to have the Docker compose file

  • available on this development server because we have

  • to execute the Docker compose file because we're

  • simulating here.

  • The way I would do it is I'm going to create

  • a mongo.yaml file in the current directory where I am.

  • I'm going to copy this.

  • And save.

  • So now I have my mongo.yaml file, and now we can start

  • all three containers using the docker-compose command:

  • docker-compose -f mongo.yaml up.

  • And here we see that it started

  • on 3000, and MongoDB and Mongo Express

  • started as well.

  • So let's check again now.

  • And here we see the database

  • is lost every time we recreate

  • a container, and of course

  • that's not good, and we're going to learn how to preserve

  • the database data when the container

  • restarts using docker volumes in the later tutorials,

  • because this is not an ideal state. OK,

  • so now that we have a database and a collection,

  • let's actually refresh and see if our application works as well.

  • Let's check.

  • Awesome. So our application works,

  • let's refresh this one as well,

  • and there is actually one thing that I needed

  • to change in the code to connect Node.js with MongoDB.

  • So let's actually go and look at that.

  • These are my handlers in Node.js,

  • where I connect to the MongoDB database,

  • and the URIs are the same.

  • And what I changed here is that it was a localhost before,

  • so instead of localhost,

  • I changed it to mongodb, because this actually

  • is a name of the container or of the service

  • that we specify here.

  • So this actually leads back to the Docker network and how

  • Docker Compose takes care of it.

  • In the URI, when I connect one

  • application in a container with another

  • one in another container,

  • I don't have to use this localhost anymore.

  • Actually, I wouldn't even need to use the port,

  • because I have all that information,

  • so the hostname and the port number, in that configuration.

  • So my application will be able to connect to MongoDB

  • using the service name. And because of that,

  • you don't have to specify here a local

  • host and the port number,

  • which is actually even more advantaged when you consider

  • using docker containers to run all of your applications

  • because it makes the connectivity

  • between them even more easier.
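  To make the change concrete, here is a sketch of the two connection URIs (the credentials are placeholders; mongodb is the service name from the compose file):

```shell
# Before: the app ran directly on the host and reached MongoDB
# via the published port on localhost.
MONGO_URL_HOST="mongodb://admin:password@localhost:27017"

# After: the app runs in a container on the same Docker network,
# so the service name is enough; Docker's internal DNS resolves it,
# and the port could even be omitted since it is known from the config.
MONGO_URL_COMPOSE="mongodb://admin:password@mongodb"

echo "$MONGO_URL_COMPOSE"
```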

  • And that actually

  • concludes this diagram

  • that we saw previously.

  • We have gone through all of the steps where we saw how

  • to develop a JavaScript application locally

  • with Docker containers.

  • Then we saw how to build them into an image just

  • like a continuous integration build will do it.

  • Then we pushed it into a private repository

  • and we simulated a development server

  • where we pulled the image from private repository.

  • and the other images from Docker Hub,

  • where we started the whole application setup with our own

  • application and the two Mongo containers

  • using Docker Compose,

  • which is how you would deploy an application on a development

  • server, so that testers or other developers will be

  • able to access the development server

  • and actually try out the application

  • that you just deployed.

  • Or you can also use it for demos.

  • So in this video, we're going to learn about Docker

  • volumes. In a nutshell, Docker volumes are used for data

  • persistence in Docker. So, for example,

  • if you have databases or other stateful applications,

  • you would want to use Docker volumes for that.

  • So what are the specific use cases when you need

  • Docker volumes?

  • So a container runs on a host. Let's say we have

  • a database container, and a container

  • has a virtual file system where the data

  • is usually stored.

  • But here there is no persistence,

  • so if I were to remove the container or stop

  • it and restart the container,

  • then the data in this virtual file system is gone.

  • And it starts from a fresh state,

  • which is obviously not very practical because I want

  • to save the changes that my application is making

  • in the database, and that's where I need volumes.

  • So what are the Docker volumes exactly? So on a host,

  • we have a physical file system. Right.

  • And the way volumes work is that we take

  • a physical file system path, could be a folder or a directory,

  • and we plug it into the container's

  • file system. So in simple terms,

  • a directory or folder on the host file system is mounted

  • into a directory or folder in the virtual

  • file system of Docker.

  • So what happens is that when a container

  • writes to its file system,

  • it gets replicated or automatically written

  • on the host file system directory and vice versa.

  • So if I were to change something on the host

  • file system,

  • it automatically appears in the container as well.

  • So that's why, when a container restarts,

  • even if it starts with a fresh state in its

  • own virtual file system,

  • it gets the data automatically from the host,

  • because the data is still there.

  • And that's how data is populated on startup

  • of a container every time you restart. Now,

  • there are different types of docker volumes

  • and so different ways of creating them.

  • Usually the way to create Docker volumes

  • is using the docker run command.

  • So in docker run there's an option called -v,

  • and this is where we define the connection

  • or the reference between the host directory

  • and the container directory.

  • And this type of volume definition is called

  • a host volume.

  • And the main characteristic of this one

  • is that you decide where on the host file

  • system that reference is made.

  • So which folder on the host file system

  • you mount into the container.

  • So the second type is where you create a volume just

  • by referencing the container directory

  • so you don't specify which directory on the host

  • should be mounted.

  • But that's taken care of by Docker itself.

  • So that directory is, first of all,

  • automatically created by Docker under /var/lib/docker/volumes.

  • So for each container there will be a folder

  • generated that gets mounted automatically to the container.

  • And this type of volume is called an anonymous

  • volume, because you don't have a reference

  • to this automatically generated folder. Basically,

  • you just have to know the path.

  • And the third volume type is actually an improvement

  • of the anonymous volumes and it specifies the name

  • of that folder on the host file system

  • and the name is up to you.

  • It's just to reference the directory, and this type

  • of volume is called a named volume. So in this case,

  • compared to anonymous volumes,

  • you can actually reference that volume just

  • by name, so you don't have to know exactly the path.

  • So from these three types,

  • the most used one, and the one that you should be

  • using in production, is actually named

  • volumes, because there are additional benefits

  • to letting Docker actually manage those

  • volume directories on the host.
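  As a summary, the three volume types described above differ only in the first part of the -v argument (the image and paths below are illustrative, not from this demo):

```shell
# 1) Host volume: you decide the host directory to mount.
docker run -v /home/mount/data:/var/lib/mysql/data mysql

# 2) Anonymous volume: only the container path is given; Docker
#    creates a folder under /var/lib/docker/volumes for it.
docker run -v /var/lib/mysql/data mysql

# 3) Named volume: a name instead of a host path, which you can
#    reference later by that name (the production-recommended form).
docker run -v db-data:/var/lib/mysql/data mysql
```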

  • Now, this showed how to create Docker volumes

  • using docker run commands,

  • but if you're using Docker Compose,

  • it's actually the same.

  • So this actually shows how to use volume definitions

  • in a docker-compose file,

  • and this is pretty much the same as in docker run commands.

  • So we have a volumes attribute and underneath

  • you define your volume definition,

  • just like you would in the -v

  • option. And here we use a named volume,

  • so db-data will be the reference name

  • that you can just make up, could be anything,

  • and /var/lib/mysql/data is the path inside the container.

  • Then you may have some other containers, and at the end,

  • so on the same level as the services,

  • you would actually list all the volumes

  • that you have defined,

  • at least the volumes that you want

  • to mount into the containers.

  • So if you were to create volumes for different containers,

  • you would list them all here, and on the container level

  • you then actually define under which path that specific

  • volume can be mounted.

  • And the benefit of that is that you can

  • actually mount a reference of the same folder on the host

  • into more than one container,

  • and that would be beneficial if those

  • containers need to share the data. In this case,

  • you would mount the same volume name or reference into two

  • different containers, and you can even mount them

  • into different paths inside the container.
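  A minimal sketch of the compose layout just described (service name, image, and the db-data volume name are examples):

```yaml
version: '3'
services:
  mysql-app:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data   # named volume : path in container
  # other services could mount db-data too, even at different paths
volumes:
  db-data:          # declared once at the top level, same level as services
    driver: local
```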

  • In this video, we are going to look at Docker

  • volumes in practice,

  • and this is a simple Node.js and MongoDB application

  • that we are going to attach the volume to, so that we don't

  • lose the database data every time we restart the container.

  • So let's head over

  • to the console, and I'm

  • going to start the MongoDB

  • database with Docker Compose.

  • So this is how the compose file looks:

  • we're going to start the MongoDB container

  • and the mongo-express container, so that we have

  • a UI to it. So I'm going to execute the docker-compose

  • command,

  • which is going to start MongoDB

  • and mongo-express.

  • So once they have started,

  • I'm going to check that mongo-express is running

  • on port 8080.

  • And here we see just the default databases.

  • So these are just created by default on startup.

  • And we're going to create our own one for

  • the Node.js application. And inside of that database,

  • I'm going to create a users collection.

  • So these are the prerequisites, or these are the things that my

  • Node.js application needs. So this one here,

  • in order to connect to the database my-db,

  • this is what we just created,

  • the my-db database, and inside of that the collection

  • called users. So let's start the application,

  • which is running on port

  • 3000 here.

  • And this is our app,

  • which, when I edit something here,

  • will write the changes

  • to my database. Now, if I were to restart

  • the MongoDB container, I would lose all this data.

  • So because of that,

  • we're going to use named volumes inside

  • of the docker-compose file to persist all this data in MongoDB.

  • Let's head over to the compose file.

  • So the first step is to define what volumes I'm going to be

  • using in any of my containers, and I'm going to do

  • that on the same level as the services.

  • So here I define the list of all the volumes that I'm going

  • to need in any of my containers.

  • And since we need data persistence for MongoDB,

  • we're going to create a mongo-data volume here. Now,

  • this is going to be the name of the volume reference,

  • but we also need to provide here a driver: local.

  • So the actual storage path, that we're going to see later

  • once it's created,

  • is actually created by Docker itself.

  • And this is kind of

  • additional information for Docker to create

  • that physical storage on the local file system.

  • So once we have a name reference to a volume defined,

  • we can actually use it in the container. So here

  • I'm going to say volumes, and here

  • I'll define a mapping between the mongo-data

  • volume that we have on our host,

  • and the second one will be the path inside of the MongoDB

  • container.

  • It has to be the path where MongoDB explicitly

  • persists its data. So, for example,

  • if you check it out online,

  • you see that the default path where MongoDB stores

  • its data is

  • /data/db, and we can actually check that out.

  • So if I say docker ps and go inside the container

  • with docker exec -it,

  • I can actually see /data/db in here, with all the data

  • that MongoDB actually holds. But this is, of course,

  • only inside the container. So when the container restarts,

  • the data gets regenerated, so nothing persists here.
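  The inspection steps just described, as commands (the container name is whatever docker ps shows for your MongoDB container):

```shell
docker ps                        # find the MongoDB container's ID or name
docker exec -it <container> sh   # open a shell inside it
ls /data/db                      # MongoDB's default data directory
```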

  • So this is the path inside of the container,

  • not on my host that we need to reference in the volumes here.

  • So we're attaching our volume on the host to /data/db

  • inside of the container. So,

  • for example, for MySQL

  • it's going to be /var/lib/mysql/data, and for Postgres

  • it's going to be /var/lib/postgresql/data, so each

  • database will have its own.

  • So you'd have to actually find the right one.
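  Putting it together for this demo, the relevant part of the compose file would look roughly like this (image tag and volume name as discussed above):

```yaml
services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db   # MongoDB's default data path in the container
volumes:
  mongo-data:
    driver: local
```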

  • So what this means is that all the data that we just saw here,

  • all of this, will be replicated from the container

  • onto our host, on this persistent volume

  • that we defined here, and vice versa,

  • meaning when a container restarts,

  • all the data that is here will be

  • replicated into that directory

  • inside of the container.

  • So now that we have defined that,

  • let's actually restart docker-compose.

  • And it restarted.

  • So once again, let's create the data.

  • And I'm going

  • to the collection and let's

  • actually change this one.

  • here,

  • and update it, so we have the data here.

  • So now that we have

  • the persistent volume defined,

  • if I were to restart all these containers,

  • this data should be persisted, so on

  • the next restart

  • I should see the my-db database, the

  • collection, and the entry here. So let's do that.

  • I'm going to do that.

  • Great, so let's check. See, the database is here,

  • the collection is here, and the entry has persisted.

  • So now let's actually see

  • where the Docker volumes

  • are located on our

  • local machine,

  • and that actually differs between the operating systems,

  • for example,

  • on a Windows laptop or a computer,

  • the path of the Docker volumes will be

  • C:\ProgramData\docker\volumes.

  • The ProgramData\docker folder actually contains all

  • the other container information,

  • so you would see other folders in the docker directory

  • besides the volumes. On Linux,

  • the path is actually /var/lib/docker/volumes,

  • which is comparable to the Windows path.

  • So this is where Docker saves all

  • its configuration and data. And on the Mac,

  • it's the same path. Inside of this volumes directory,

  • You actually have a list of all the volumes that one

  • or many containers are using,

  • and each volume has its own hash,

  • which is, or which has to be, unique, and then

  • _data underneath will actually contain all the files and all

  • the data that is persisted.
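  So the per-OS locations and the layout underneath can be summarized like this (the volume name is an example):

```shell
# Windows: C:\ProgramData\docker\volumes
# Linux:   /var/lib/docker/volumes
# macOS:   /var/lib/docker/volumes (inside Docker Desktop's Linux VM)

ls /var/lib/docker/volumes                    # one folder per volume
ls /var/lib/docker/volumes/my-volume/_data    # the persisted files live here
```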

  • Let's head over to the command line and actually

  • see the volume that we persisted for MongoDB.

  • Now, an interesting note here is that if I were to go

  • to this path that I just showed you in the presentation,

  • which is /var/lib/docker,

  • you see there is no such directory.

  • So that could be a little bit confusing.

  • But the way it works is: on Linux specifically,

  • you would actually have that path directly on your host,

  • but the Mac makes it a little bit different.

  • So basically what happens is that the Docker for Mac

  • application actually creates a Linux VM

  • in the background and stores all the Docker

  • information, or Docker data about the containers and the volumes,

  • et cetera, inside of that VM's storage.
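  One way to get a shell into that VM, which appears to be what is used here (the tty path is from older Docker Desktop for Mac versions and may differ on yours):

```shell
# Attach to the Docker Desktop VM's terminal...
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
# ...and inside the VM, the familiar Linux paths exist:
ls /var/lib/docker/volumes
```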

  • So if we execute this command here.

  • So this is actually the physical storage

  • on my laptop, where all the data is stored.

  • But if I execute this command,

  • I actually get the terminal of that VM. And inside here,

  • if I look,

  • I have a different, virtual filesystem,

  • and I can find that path that I showed you here,

  • so it's /var/lib/docker. See,

  • I have all this Docker information here.

  • I have the containers folder and I have

  • volumes folder. So this is the one we need.

  • So now let's actually go to the volumes,

  • and this is a list of volumes that I have

  • created, and this is the one that came from our docker-compose,

  • right? This is the name of our app; this is what Docker

  • Compose actually takes as the name.

  • You can actually take a look here:

  • when it's creating these containers,

  • it prepends this name as a prefix,

  • and then there is mongodb. And our

  • volume has the same pattern:

  • it has the prefix and then mongo-data,

  • which is the name that we defined here.

  • So now if we look inside of that mongo-data

  • volume directory,

  • we see the _data folder.

  • These

  • would be the anonymous volumes,

  • so basically here you don't have a name reference,

  • it's just some random unique ID,

  • but it's the same kind of directory as this one here,

  • the difference being that this one has a name,

  • so it's easier to reference it by name.

  • So this is an anonymous volume, and this is a named volume,

  • but the contents will be used in the same way.

  • So here, you see, in this _data folder

  • we have all the data that MongoDB uses.

  • So this will be where the default

  • databases and the changes that we make

  • through our application are stored.

  • And if I go inside of the container, remember,

  • this volume is attached to

  • MongoDB and is replicated inside of the container

  • under the path /data/db,

  • so if we go inside of the container, actually

  • right here,

  • yes,

  • there it should be. We'll see

  • actually the same

  • kind of data here,

  • so we have all the index and collection files, just

  • like we saw in this one.

  • So now whenever we make changes to our application,

  • for example, we change it to Smith,

  • this will make the container update its data, and that will

  • cascade into this volumes directory that we have

  • here, so that on the next startup of the container,

  • when /data/db is totally empty,

  • it will actually populate this directory with the data

  • from this persistent volume, so that we will see all

  • the data that we created through our

  • application again on startup. And that's how volumes work.

  • In order to end that screen

  • session that we just started,

  • because exiting doesn't work in this case,

  • somehow, on Mac,

  • you can actually press the Ctrl-A K key combination

  • and then just type y, and the session will be closed.

  • So when you do screen -ls,

  • you should see it's actually terminating.
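  For reference, the keystrokes and the check described above (standard GNU screen bindings):

```shell
# Inside the screen session:
#   Ctrl-A  k   -> screen asks "Really kill this window [y/n]"
#   y           -> confirms, closing the session
screen -ls      # lists sessions; the killed one shows up as terminating
```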

  • Congratulations. You made it to the end.

  • I hope you learned a lot and got some valuable

  • knowledge from this course.

  • Now that you've learned all about containers and technology,

  • you can start building complex applications with tens

  • or even hundreds of containers. Of course,

  • these containers would need to be deployed across multiple

  • servers in a distributed way.

  • You can imagine what overhead and headache

  • it would be to manually manage those hundreds of containers.

  • So as a next step, you can learn about container

  • orchestration tools, and Kubernetes in particular,

  • which is the most popular tool to automate this task.

  • If you want to learn about Kubernetes,

  • be sure to check out my tutorials on that topic

  • and subscribe to my channel for more

  • content on modern DevOps tools. Also,

  • if you want to stay connected,

  • you can follow me on social media or join

  • the private Facebook group. I would love to see you there.

  • So thank you for watching and see you in the next video.
