SQRT Decomposition

What is SQRT decomposition?

SQRT decomposition is an optimization technique for query-style problems. It can answer standard queries in O(sqrt n) on data structures such as arrays and trees. Its basic forms are limited, though; range updates can’t be done efficiently (I’m sure there are ways, but other methods like segment trees will be faster).

Why?

A common question is: if other structures, such as segment trees, can do the same queries in O(log n), why is SQRT decomposition used so often? It is just a lot easier to implement. Making changes to a heavy-light decomposition may be hard, but the SQRT decomposition of a tree for path queries is super simple.

Mo’s Algorithm (Arrays):

Mo’s algorithm is a way to build a structure that can process array range queries (max, sum, min, xor, …) in O(sqrt n). It can do point updates but does not handle range updates very well.

In Mo’s algorithm, the array of total length n is split into about sqrt(n) smaller chunks of roughly sqrt(n) elements each. Each chunk is responsible for remembering a summary (here, the sum) of the values within its range.

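As a rough C++ sketch of this construction (the array values are made up for illustration), the array is cut into blocks of size about sqrt(n), and each block stores the sum of its values:

#include <bits/stdc++.h>
using namespace std;

int main() {
    // made-up input array of length n = 16
    vector<long long> a = {5, 2, 7, 1, 3, 8, 4, 6, 9, 2, 0, 5, 1, 7, 3, 4};
    int n = (int)a.size();
    int blockSize = max(1, (int)sqrt((double)n));      // ~sqrt(n) elements per chunk
    int numBlocks = (n + blockSize - 1) / blockSize;   // ~sqrt(n) chunks

    // each chunk remembers the sum of the values in its range
    vector<long long> blockSum(numBlocks, 0);
    for (int i = 0; i < n; ++i)
        blockSum[i / blockSize] += a[i];

    for (int b = 0; b < numBlocks; ++b)
        cout << "block " << b << " sum = " << blockSum[b] << "\n";
    return 0;
}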

When querying, every chunk that lies completely inside the interval can be handled in constant time. If the sum along the interval [3, 11] is requested, Mo’s algorithm takes the stored totals of the two chunks that are fully contained in the interval.


When querying something like the sum over [1, 14], it takes the stored sums of all of the fully contained chunks and then simply loops through the individual values in the partially covered chunks at the two ends.

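A minimal C++ sketch of such a range-sum query on the same made-up array (build and rangeSum are names chosen for this sketch): fully covered blocks contribute their stored sums, and the partially covered blocks at the ends are walked element by element.

#include <bits/stdc++.h>
using namespace std;

int n, blockSize;
vector<long long> a, blockSum;

void build(vector<long long> vals) {
    a = move(vals);
    n = (int)a.size();
    blockSize = max(1, (int)sqrt((double)n));
    blockSum.assign((n + blockSize - 1) / blockSize, 0);
    for (int i = 0; i < n; ++i) blockSum[i / blockSize] += a[i];
}

// sum over the inclusive range [l, r]
long long rangeSum(int l, int r) {
    long long s = 0;
    while (l <= r && l % blockSize != 0) s += a[l++];   // left partial block
    while (l + blockSize - 1 <= r) {                    // blocks fully inside [l, r]
        s += blockSum[l / blockSize];
        l += blockSize;
    }
    while (l <= r) s += a[l++];                         // right partial block
    return s;
}

int main() {
    build({5, 2, 7, 1, 3, 8, 4, 6, 9, 2, 0, 5, 1, 7, 3, 4});
    cout << rangeSum(3, 11) << "\n";   // one partial element + two full blocks
    cout << rangeSum(1, 14) << "\n";   // partial ends + full blocks in the middle
    return 0;
}

Each query touches at most two partial blocks of about sqrt(n) elements plus about sqrt(n) block totals, which gives the O(sqrt n) bound.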

One advantage of SQRT decomposition over segment trees in this scenario is the O(1) point update, since only the element itself and the total of the block containing it need to be changed.

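A short sketch of the O(1) point update on a made-up array: the element and its block’s stored total are adjusted together.

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> a = {5, 2, 7, 1, 3, 8, 4, 6};
    int n = (int)a.size();
    int blockSize = max(1, (int)sqrt((double)n));
    vector<long long> blockSum((n + blockSize - 1) / blockSize, 0);
    for (int i = 0; i < n; ++i) blockSum[i / blockSize] += a[i];

    // O(1) point update: set a[pos] = newValue
    int pos = 5;
    long long newValue = 10;
    blockSum[pos / blockSize] += newValue - a[pos];   // fix the block total
    a[pos] = newValue;                                // fix the element itself

    cout << blockSum[pos / blockSize] << "\n";        // total already reflects the change
    return 0;
}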

Unfortunately, Mo’s algorithm doesn’t do well with range updates, because in its basic form a chunk cannot represent an update that only partly applies to later queries. If a chunk of length 4 lies inside a sum range update (add x to every element), then adding 4 * x to the block total only works for future queries that cover the whole block. If only a part of the block is queried, the individual values never received the +x.

Some sort of workaround may be able to help with this, but, in the basic form, a range update still ends up being O(n).
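
For illustration, here is the naive fallback on a made-up array: the +x is pushed into every element (and into the affected block totals), which keeps later partial-block queries correct but costs O(n) in the worst case. Keeping a pending per-block "add" value is one proposal in that direction, but it is not shown here.

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> a = {5, 2, 7, 1, 3, 8, 4, 6};
    int n = (int)a.size();
    int blockSize = max(1, (int)sqrt((double)n));
    vector<long long> blockSum((n + blockSize - 1) / blockSize, 0);
    for (int i = 0; i < n; ++i) blockSum[i / blockSize] += a[i];

    // naive range update: add x to every element of [l, r];
    // every element is touched so that partial-block queries stay correct
    int l = 1, r = 6;
    long long x = 3;
    for (int i = l; i <= r; ++i) {
        a[i] += x;
        blockSum[i / blockSize] += x;
    }

    cout << blockSum[0] << " " << blockSum[1] << "\n";
    return 0;
}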


SQRT Decomposition on Trees:

SQRT decomposition also works very similarly on trees. The plan is still to split the tree into chunks: if the tree has a height of n, then each chunk will contain sqrt(n) consecutive levels. A SQRT decomposition for sum path queries is used as the example here (standardly, this can be solved with a heavy-light decomposition in O(log^2 n) per query). Range updates are also still hard to pull off.

In every path query here, the path is the unique shortest path between the two vertices mentioned.

In order to do path queries, it is usually easiest to calculate the LCA, then find the paths from each of the nodes to the LCA, and then combine their answers.

Calculating LCA:

There are a couple of ways to get the LCA (Lowest Common Ancestor):

RMQ over an Euler tour (good when there are a lot of queries): O(n log n) build time/O(n log n) space, but O(1) LCA time

Binary lifting: O(n log n) build time/space, O(log n) LCA time (a sketch follows below)
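
As a rough, self-contained C++ sketch of the binary-lifting option (the example tree is made up, and up, dep, and lca are names chosen for this sketch):

#include <bits/stdc++.h>
using namespace std;

// binary lifting: up[j][v] is the 2^j-th ancestor of v
int n, LOG;
vector<vector<int>> adj, up;
vector<int> dep;

void dfs(int v, int p) {
    up[0][v] = p;
    for (int j = 1; j < LOG; ++j)
        up[j][v] = up[j - 1][ up[j - 1][v] ];
    for (int to : adj[v]) if (to != p) {
        dep[to] = dep[v] + 1;
        dfs(to, v);
    }
}

int lca(int u, int w) {
    if (dep[u] < dep[w]) swap(u, w);
    int diff = dep[u] - dep[w];
    for (int j = 0; j < LOG; ++j)         // lift the deeper node to the same depth
        if (diff >> j & 1) u = up[j][u];
    if (u == w) return u;
    for (int j = LOG - 1; j >= 0; --j)    // lift both nodes to just below the LCA
        if (up[j][u] != up[j][w]) { u = up[j][u]; w = up[j][w]; }
    return up[0][u];
}

int main() {
    n = 7; LOG = 3;                       // LOG chosen so that 2^LOG >= n
    adj.assign(n, {});
    up.assign(LOG, vector<int>(n));
    dep.assign(n, 0);
    int edges[6][2] = {{0, 1}, {0, 2}, {1, 3}, {1, 4}, {2, 5}, {5, 6}};
    for (auto& e : edges) { adj[e[0]].push_back(e[1]); adj[e[1]].push_back(e[0]); }
    dfs(0, 0);                            // root at 0; the root's "parent" is itself
    cout << lca(3, 4) << " " << lca(3, 6) << "\n";   // expected: 1 and 0
    return 0;
}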

Precalculating Paths:

In trees, the bottom nodes of each chunk (the nodes on a chunk’s lowest level) are the most important.

Each bottom node will store the sum of the path from that node up to the “top” of its chunk. So, if a query’s path passes all the way through a chunk, it can simply jump past the chunk and take the bottom node’s precomputed sum, since the route upward out of the chunk is already precalculated.

When querying, this chunk-jumping is repeated until the current node is in the same chunk as the LCA, at which point the remaining path is walked node by node.
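
Below is a minimal, self-contained C++ sketch of this scheme for sum path queries. The example tree, the node values, and the helper names (topOf, sumUp, climbSum, pathSum) are made up for illustration, and a naive O(height) LCA is used only to keep the sketch short; one of the LCA methods above would normally replace it.

#include <bits/stdc++.h>
using namespace std;

int n, B;                           // number of nodes, levels per chunk (~sqrt of the height)
vector<vector<int>> adj;
vector<int> par, dep, topOf;        // parent, depth, top node of the chunk containing v
vector<long long> val, sumUp;       // node values, sum from v up to topOf[v] (both inclusive)

void build(int root) {
    B = max(1, (int)sqrt((double)n));
    par.assign(n, -1); dep.assign(n, 0);
    topOf.assign(n, 0); sumUp.assign(n, 0);
    vector<bool> seen(n, false);
    queue<int> q;                   // BFS so a parent is finished before its children
    q.push(root); seen[root] = true;
    while (!q.empty()) {
        int v = q.front(); q.pop();
        if (dep[v] % B == 0) { topOf[v] = v; sumUp[v] = val[v]; }             // v starts a new chunk
        else { topOf[v] = topOf[par[v]]; sumUp[v] = sumUp[par[v]] + val[v]; } // extend the chunk
        for (int to : adj[v]) if (!seen[to]) {
            seen[to] = true; par[to] = v; dep[to] = dep[v] + 1; q.push(to);
        }
    }
}

// naive O(height) LCA, only to keep this sketch self-contained
int lca(int u, int w) {
    while (dep[u] > dep[w]) u = par[u];
    while (dep[w] > dep[u]) w = par[w];
    while (u != w) { u = par[u]; w = par[w]; }
    return u;
}

// sum of the values on the path from v up to, but not including, its ancestor l
long long climbSum(int v, int l) {
    long long s = 0;
    while (dep[v] / B > dep[l] / B) { s += sumUp[v]; v = par[topOf[v]]; }  // jump a whole chunk
    while (v != l) { s += val[v]; v = par[v]; }                            // same chunk as the LCA
    return s;
}

long long pathSum(int u, int w) {
    int l = lca(u, w);
    return climbSum(u, l) + climbSum(w, l) + val[l];
}

int main() {
    n = 7;
    adj.assign(n, {});
    val = {1, 2, 3, 4, 5, 6, 7};
    int edges[6][2] = {{0, 1}, {0, 2}, {1, 3}, {1, 4}, {2, 5}, {5, 6}};
    for (auto& e : edges) { adj[e[0]].push_back(e[1]); adj[e[1]].push_back(e[0]); }
    build(0);
    cout << pathSum(3, 6) << "\n";   // path 3-1-0-2-5-6: 4+2+1+3+6+7 = 23
    return 0;
}

Each side of the query makes at most about sqrt(height) chunk jumps and then at most about sqrt(height) single steps inside the LCA’s chunk, which matches the O(sqrt n) bound above.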

Updates on the tree:

When doing a point change, update the node’s value and then repair the precomputed sums of the bottom nodes in that chunk whose paths to the top pass through the updated node.
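
As a rough continuation of the previous sketch (it reuses adj, par, dep, val, sumUp, and B from there, and pointUpdate is a hypothetical helper name), the update only touches nodes inside the changed node’s own chunk:

// adjust val[v], then repair sumUp for every node in v's chunk whose
// path to the chunk top passes through v (v itself and its in-chunk descendants)
void pointUpdate(int v, long long newValue) {
    long long delta = newValue - val[v];
    val[v] = newValue;
    stack<int> st;
    st.push(v);
    while (!st.empty()) {
        int x = st.top(); st.pop();
        sumUp[x] += delta;
        for (int to : adj[x])
            if (to != par[x] && dep[to] / B == dep[v] / B)
                st.push(to);
    }
}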

As usual, range updates can’t really be done, since a chunk’s precomputed sums would no longer be correct when the update only partially covers it (for example, when the LCA of the updated path is located inside that chunk).