Thursday, 1 October 2020

 Things to learn

distributed tracing systems

Containers are designed to run only a single process per container (unless the process itself spawns child processes). If you run multiple unrelated processes in a single container, it is your responsibility to keep all those processes running, manage their logs, and so on. For example, you’d have to include a mechanism for automatically restarting individual processes if they crash. Also, all those processes would log to the same standard output, so you’d have a hard time figuring out what process logged what.

Therefore, you need to run each process in its own container. That’s how Docker and Kubernetes are meant to be used.

Because you’re not supposed to group multiple processes into a single container, it’s obvious you need another higher-level construct that will allow you to bind containers together and manage them as a single unit. This is the reasoning behind pods.

A pod of containers allows you to run closely related processes together and provide them with (almost) the same environment as if they were all running in a single container, while keeping them somewhat isolated. This way, you get the best of both worlds. You can take advantage of all the features containers provide, while at the same time giving the processes the illusion of running together.

In the previous chapter, you learned that containers are completely isolated from each other, but now you see that you want to isolate groups of containers instead of individual ones. You want containers inside each group to share certain resources, although not all, so that they’re not fully isolated. Kubernetes achieves this by configuring Docker to have all containers of a pod share the same set of Linux namespaces instead of each container having its own set.

Because all containers of a pod run under the same Network and UTS namespaces (we’re talking about Linux namespaces here), they all share the same hostname and network interfaces. Similarly, all containers of a pod run under the same IPC namespace and can communicate through IPC.

But when it comes to the filesystem, things are a little different. Because most of the container’s filesystem comes from the container image, by default, the filesystem of each container is fully isolated from other containers. However, it’s possible to have them share file directories using a Kubernetes concept called a Volume.

One thing to stress here is that because containers in a pod run in the same Network namespace, they share the same IP address and port space. This means processes running in containers of the same pod need to take care not to bind to the same port numbers or they’ll run into port conflicts. But this only concerns containers in the same pod. Containers of different pods can never run into port conflicts, because each pod has a separate port space. All the containers in a pod also have the same loopback network interface, so a container can communicate with other containers in the same pod through localhost.
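As a sketch of these ideas, here is a minimal pod manifest with two containers (the pod name, images, ports, and paths are all illustrative, not from any real deployment). Both containers share the pod’s network namespace, so the sidecar could reach the web server on localhost, and the shared emptyDir volume lets them exchange files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                  # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.19             # illustrative image
    ports:
    - containerPort: 80           # only one container in the pod may bind this port
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.32           # illustrative image
    command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data        # same volume, different mount path
```

Because the two containers share one Network namespace, the sidecar could fetch the page it just wrote via `wget -qO- localhost:80`, with no Service or pod IP involved.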

All pods in a Kubernetes cluster reside in a single flat, shared network-address space, which means every pod can access every other pod at that pod’s IP address. No NAT (Network Address Translation) gateways exist between them. When two pods exchange network packets, each sees the actual IP address of the other as the source IP in the packets.

If both the frontend and backend are in the same pod, then both will always be run on the same machine. If you have a two-node Kubernetes cluster and only this single pod, you’ll only be using a single worker node and not taking advantage of the computational resources (CPU and memory) you have at your disposal on the second node. Splitting the pod into two would allow Kubernetes to schedule the frontend to one node and the backend to the other node, thereby improving the utilisation of your infrastructure.

Another reason why you shouldn’t put them both into a single pod is scaling. A pod is also the basic unit of scaling. Kubernetes can’t horizontally scale individual containers; instead, it scales whole pods. If your pod consists of a frontend and a backend container, when you scale up the number of instances of the pod to, let’s say, two, you end up with two frontend containers and two backend containers.

Usually, frontend components have completely different scaling requirements than the backends, so we tend to scale them individually. Not to mention the fact that backends such as databases are usually much harder to scale compared to (stateless) frontend web servers. If you need to scale a container individually, this is a clear indication that it needs to be deployed in a separate pod.
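To make this concrete, splitting the two components into separate pods, each managed by its own Deployment, lets each be scaled independently (the names, images, and replica counts below are illustrative, not a recommendation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                  # hypothetical name
spec:
  replicas: 3                     # scale the stateless frontend freely
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: example/frontend:1.0   # illustrative image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1                     # the harder-to-scale backend keeps its own count
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: api
        image: example/backend:1.0    # illustrative image
```

Had both containers lived in one pod, the only choice would be scaling them in lockstep.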

When deciding whether to put two containers into a single pod or into two separate pods, you always need to ask yourself the following questions:

  • Do they need to be run together or can they run on different hosts?
  • Do they represent a single whole or are they independent components?
  • Must they be scaled together or individually?

Basically, you should always gravitate toward running containers in separate pods, unless a specific reason requires them to be part of the same pod.

Saturday, 11 April 2020

New Baby Shopping

1. MamyPoko Pants: Newborn baby (0-5 kg)

2. Mamaearth Nourishing Baby Hair Oil with Almond & Avocado, 100ml

3. Baby Wipes:

Little's Soft Cleansing Baby Wipes with Aloe Vera, Jojoba Oil and Vitamin E

4. Dry Sheet

Bey Bee Water Resistant Bed Protector Baby Dry Sheet with Ultra absorbance

5. Baby Vest: New Born Infant Sleeveless Sando Vest Hosiery

Sunday, 15 December 2019

Marriage Shopping

Necessary household items after marriage.
I purchased the following items:

1. Refrigerator:

235 liter LG refrigerator (I purchased this one)

Other good options:


2. Washing machine

LG 6.2 kg Inverter Fully-Automatic Top Loading Washing Machine ( T7288NDDLG.ASFPEIL, Middle Free Silver)

3. TV

Sony Bravia 80 cm (32 Inches) Full HD LED Smart TV KLV-32W672F (Black) (2018 model)

4. Burner:

Elica Vetro Glass Top 3 Burner Gas Stove (703 CT VETRO BLK)

5. BED options:

Evok Texas Engineerwood Queen Bed with Storage

Can go with other Evok options as well as per budget
Purchase in wood options: Solid wood (costly), plywood, HDF(high density fiberboard)

6. Mattress

Sleepwell Esteem Firmtec Mattress 

7. Inverter/Battery

Amaron AAM-TT-CR00150TT Plastic Tall Tubular 150Ah Battery

Amaron Hi Life Pro 900Va Pure Sinewave Home Ups Inverters

8. Water purifier

Kent Ace Mineral 7-Litre 60-Watt RO+UV+UF Water Purifier (White and Aquamarine)

9. Mixer Grinder

Philips Viva Collection HL7701/00 750-Watt Mixer Grinder with 4 Jars (Elegant Lavender and White)

10. Cloth Dryer Stand

TNC Made in India Rust Free Floor Mounted Clothes Drying Stand Stainless Steel Floor Cloth Dryer Stand (Blue)

11. Hand Blender

Philips Daily Collection HL1655/00 250-Watt Hand Blender (White)

12. Bed Sheet

Bombay Dyeing Cotton Double Bed Sheet Breeze

13. Roti Tawa:

14. Dosa Tawa

Hawkins Futura Non Stick Dosa Tawa, 33cm, Black

15. Dohar / Blanket

16. Neelkamal Chair

17. Ajanta Wall clock:

18. Scissor

19. Plastic Chopper

20. Cello Oasis Centre Table (Ice Brown)

21. Nayasa Store-in Plastic Container, 3-Pieces, Blue

22. Screwdriver
Spartan S-6 Screwdriver Kit (Assorted, 6-Pieces)

Tuesday, 5 May 2015

Check before downcasting in Java

Before downcasting, check the dynamic type of the object with instanceof, and only then downcast. (The original snippet checked for String while the object was a StringBuffer, so the guard could never succeed; the types must match.)

Object obj = new StringBuffer("Hello");
if (obj instanceof StringBuffer) {
    StringBuffer strBuf = (StringBuffer) obj;  // safe: the runtime type matches
}
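A small runnable sketch of the same guard (the class and method names here are my own, purely for illustration): the cast only happens after instanceof confirms the runtime type, so no ClassCastException can occur.

```java
// DowncastDemo -- hypothetical class name, for illustration only
public class DowncastDemo {

    // Returns the StringBuffer's contents if obj really is a StringBuffer,
    // or null otherwise -- the instanceof guard makes the downcast safe.
    static String describe(Object obj) {
        if (obj instanceof StringBuffer) {
            StringBuffer sb = (StringBuffer) obj;  // safe downcast
            return sb.toString();
        }
        return null;                               // wrong runtime type: no cast attempted
    }

    public static void main(String[] args) {
        System.out.println(describe(new StringBuffer("Hello")));  // Hello
        System.out.println(describe("just a String"));            // null
    }
}
```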

Daemon thread in Java

A daemon thread is a thread that does not prevent the JVM from exiting when the program finishes, even if the thread is still running.
An example of a daemon thread is the garbage collector.
You can use the setDaemon() method to change the Thread daemon properties.

setDaemon(boolean) can only be called before the thread has been started.
By default the thread inherits the daemon status of its parent thread.

  • When a new thread is created it inherits the daemon status of its parent.
  • Normal threads and daemon threads differ in what happens when they exit. When the JVM halts, any remaining daemon threads are abandoned: finally blocks are not executed and stacks are not unwound; the JVM just exits. For this reason daemon threads should be used sparingly, and it is dangerous to use them for tasks that might perform any sort of I/O.

public class DaemonTest {
    public static void main(String[] args) throws InterruptedException {
        new WorkerThread().start();
        Thread.sleep(3000);   // let the worker print for a while
        System.out.println("Main Thread ending");
    }
}

class WorkerThread extends Thread {
    public WorkerThread() {
        // When false (a user thread), the worker keeps the JVM alive.
        // When true (a daemon thread), the worker terminates when the
        // main thread terminates.
        setDaemon(true);
    }

    public void run() {
        int count = 0;
        while (true) {
            System.out.println("Hello from Worker " + count++);
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}

Sorting Techniques used in Java

According to the Java 7 API doc for primitives:
Implementation note: The sorting algorithm is a Dual-Pivot Quicksort by Vladimir Yaroslavskiy, Jon Bentley, and Joshua Bloch. This algorithm offers O(n log(n)) performance on many data sets that cause other quicksorts to degrade to quadratic performance, and is typically faster than traditional (one-pivot) Quicksort implementations.
According to the Java 7 API doc for objects:
The implementation was adapted from Tim Peters's list sort for Python (TimSort). It uses techniques from Peter McIlroy's "Optimistic Sorting and Information Theoretic Complexity", in Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 467-474, January 1993.
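In practice both algorithms sit behind the same Arrays.sort entry points: the primitive overloads use the dual-pivot quicksort, while the Object overloads use TimSort (which is also why sorting objects is guaranteed to be stable). A small sketch:

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortDemo {
    public static void main(String[] args) {
        // Primitive overload -> dual-pivot quicksort (not stable, but
        // stability is unobservable for primitives anyway).
        int[] primitives = {5, 1, 4, 2, 3};
        Arrays.sort(primitives);
        System.out.println(Arrays.toString(primitives));  // [1, 2, 3, 4, 5]

        // Object overload -> TimSort, a stable merge-sort variant:
        // "pear" stays ahead of "kiwi" because they compare equal by
        // length and "pear" appeared first in the input.
        String[] words = {"pear", "fig", "apple", "kiwi"};
        Arrays.sort(words, Comparator.comparingInt(String::length));
        System.out.println(Arrays.toString(words));  // [fig, pear, kiwi, apple]
    }
}
```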