In this blog post we give a solution to problem 6 of IMO 2021. The method used differs from those I have seen and makes it possible to achieve an even sharper estimate. Let me first recall the problem.

**IMO 2021, Problem 6.** Let $m \ge 2$ be an integer, $A$ a finite set of integers (not necessarily positive) and $B_1, B_2, \dots, B_m$ subsets of $A$. Suppose that, for every $k = 1, 2, \dots, m$, the sum of the elements of $B_k$ is $m^k$. Prove that $A$ contains at least $m/2$ elements.

Solution. In fact, we’ll prove **something sharper**: $A$ contains at least $\frac{2m}{3}$ elements. (It could be even further improved, see the end of the post.) Let $n := |A|$ and enumerate $A = \{a_1, a_2, \dots, a_n\}$. We must prove $n \ge \frac{2m}{3}$. Any subset $X$ of $A$ can be represented as an $n$-dimensional vector of zeroes and ones in the usual way: the $i$-th coordinate is $1$ if $a_i \in X$ and $0$ if $a_i \notin X$. For each set $B_k$ let $b_k$ be the $n$-dimensional binary vector that corresponds to $B_k$. The idea is not unusual: for the sake of contradiction assume $n < \frac{2m}{3}$ and prove that for some $k$ the vector $b_k$ is a linear combination of the previous vectors $b_1, \dots, b_{k-1}$ with coefficients of magnitude at most $m-1$, i.e.

$$b_k = \sum_{i=1}^{k-1} \lambda_i b_i, \qquad |\lambda_i| \le m-1. \tag{1}$$

Indeed, let $a = (a_1, a_2, \dots, a_n)$. The sum of the elements of $B_k$ equals $\langle a, b_k \rangle = m^k$, so taking the inner product of $(1)$ with $a$ implies

$$m^k = \sum_{i=1}^{k-1} \lambda_i m^i \le (m-1)\left(m^{k-1} + \dots + m\right) = m^k - m,$$

which is a contradiction.
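As a quick numeric sanity check (sample values of $m$ and $k$, nothing problem-specific), the largest value attainable by such a combination falls short of $m^k$ by exactly $m$:

```python
# A combination of m, m^2, ..., m^{k-1} with coefficients of magnitude
# at most m-1 is at most (m-1) * (m^{k-1} + ... + m) = m^k - m < m^k.
for m in range(2, 30):
    for k in range(2, 12):
        largest = (m - 1) * sum(m**i for i in range(1, k))
        assert largest == m**k - m   # closed form of the geometric sum
        assert largest < m**k        # hence the contradiction
```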

So, **it boils down to a pure linear algebra question** which has a standalone value.

**Derived problem**. Let $v_1, v_2, \dots, v_m$ be $n$-dimensional binary vectors with $n < \frac{2m}{3}$. Then some of them, say the vector $v_k$, can be represented as in $(1)$, that is, $v_k = \sum_{i<k} \lambda_i v_i$ with coefficients satisfying $|\lambda_i| \le m-1$.

We will provide a **constructive algorithm** that finds a representation as in $(1)$. Let $M$ be the $n \times m$ matrix whose columns are the vectors $v_1, v_2, \dots, v_m$ (in this order). Note that $\operatorname{rank}(M) \le n < m$. We may assume $\operatorname{rank}(M) = n$, since if $\operatorname{rank}(M) = r < n$ we can choose $r$ linearly independent rows of $M$ and proceed with the same arguments in dimension $r$. Let us rearrange the columns of $M$ (i.e. the vectors $v_i$) into a new matrix $M_0$ with columns $c_1, c_2, \dots, c_m$ so that the first $n$ columns (vectors) are linearly independent.

**First stage**. Through a series of steps, each of which swaps two columns, we ensure that every $c_j$ with $j > n$ is a linear combination of those vectors among $c_1, \dots, c_n$ whose original indices are less than the original index of $c_j$.

Initially, we set $M_0$ to be the rearranged matrix constructed above. On the $s$-th step we inspect the matrix $M_s$ with columns $c_1, c_2, \dots, c_m$ (which is some permutation of the initial vectors $v_1, \dots, v_m$). Suppose there exists a column $c_j$, $j > n$, which is (uniquely) represented as

$$c_j = \sum_{i=1}^{n} \lambda_i c_i,$$

and additionally for some $i$ with $\lambda_i \ne 0$ the original index of $c_i$ is greater than the original index of $c_j$. In this case we swap the vectors/columns $c_i$ and $c_j$, thus obtaining a new matrix $M_{s+1}$. Note that the first $n$ columns of $M_{s+1}$ still remain linearly independent, because $\lambda_i \ne 0$. Further, we proceed to the next step.

Obviously, we cannot infinitely pump columns with lesser original indices into the first $n$ positions (each swap strictly decreases the sum of the original indices of the first $n$ columns), so this procedure ends at some moment. Denote the final matrix by $N$ and let its columns be $u_1, u_2, \dots, u_m$. This permutation of the initial vectors satisfies the following property, which we could call *increasing order representation*. (Don’t search for it, I just invented it.)

**Definition**. Given a system of $n$-dimensional vectors $u_1 = v_{\sigma(1)}, u_2 = v_{\sigma(2)}, \dots, u_m = v_{\sigma(m)}$, where $\sigma$ is a permutation of $1, 2, \dots, m$ and $m > n$, we say that this permutation of vectors is of *increasing order representation* if the following two conditions hold: (i) the first $n$ vectors are linearly independent; (ii) any vector $u_j$, where $j > n$, is represented (uniquely) as $u_j = \sum_{i=1}^{n} \lambda_i u_i$, and it holds $\sigma(i) < \sigma(j)$ whenever $\lambda_i \ne 0$.

**Second stage**. At this stage we begin to represent consecutively each column $c_t$, $t > n$, of the matrix $N$ as a combination of its first $n$ columns. If there is a coefficient of large magnitude in this representation, we swap the corresponding columns. We prove that at some moment all the coefficients of the linear combination will be “small”, that is, of magnitude at most $m-1$.

We start with the matrix $N_n := N$ whose columns are the vectors $u_1, u_2, \dots, u_m$ and assume $\sigma(n+1) < \sigma(n+2) < \cdots < \sigma(m)$; otherwise we rearrange appropriately these last $m-n$ vectors, which violates neither (i) nor (ii). We apply consecutively for $t = n+1, n+2, \dots, m$ the following procedure:

**Procedure at step $t$.** Let the matrix $N_{t-1}$ have columns $c_1, c_2, \dots, c_m$. We write

$$c_t = \sum_{i=1}^{n} \lambda_i c_i. \tag{2}$$

It can be done since the columns of $N_{t-1}$ form a permutation of *increasing order representation*, and in particular its first $n$ columns are linearly independent. If all coefficients in $(2)$ have absolute values at most $m-1$, we stop. If there exists a coefficient $\lambda_j$ with $|\lambda_j| > m-1$, we swap the $j$-th column with $c_t$, thus obtaining a new matrix $N_t$. Denote by $D_{t-1}$ (resp. $D_t$) the square matrix formed by the first $n$ columns of $N_{t-1}$ (resp. $N_t$). Cramer’s rule yields $\lambda_j = \det D_t / \det D_{t-1}$, therefore

$$|\det D_t| > (m-1)\,|\det D_{t-1}|. \tag{3}$$
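The Cramer’s rule step is easy to check numerically. The sketch below (hypothetical $3 \times 3$ integer data, not taken from the problem) verifies that replacing the $j$-th column of the square matrix by the combined column multiplies the determinant by exactly $\lambda_j$, which is what makes the determinant grow under a swap:

```python
from fractions import Fraction

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# hypothetical data: an invertible matrix D (rows shown; its columns are c_1, c_2, c_3)
D = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
lam = [Fraction(7), Fraction(-3), Fraction(2)]   # coefficients of c_t in the c_i

# c_t = lam_1 * c_1 + lam_2 * c_2 + lam_3 * c_3
c_t = [sum(lam[i] * D[r][i] for i in range(3)) for r in range(3)]

for j in range(3):
    Dj = [row[:] for row in D]
    for r in range(3):
        Dj[r][j] = c_t[r]                # swap c_t into position j
    # Cramer's rule: the determinant is multiplied by exactly lam_j
    assert det3(Dj) == lam[j] * det3(D)
```

So whenever some $|\lambda_j| > m-1$, the swap multiplies $|\det|$ by more than $m-1$; since determinants of zero-one matrices cannot grow forever, only finitely many such swaps are possible.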

Note that the columns of the new matrix $N_t$ still form a permutation of *increasing order representation* as far as the columns $c_l$ with $l > t$ are concerned: by the ordering $\sigma(n+1) < \cdots < \sigma(m)$, the column $c_t$ entering the first block has a smaller original index than any such $c_l$, so after substituting for the removed column the nonzero coefficients still point only to vectors of smaller original index.

We claim that the described procedure finally stops at some step $t \le m$, i.e. at an instance that satisfies

$$c_t = \sum_{i=1}^{n} \lambda_i c_i, \qquad |\lambda_i| \le m-1,$$

and additionally $\sigma(i) < \sigma(t)$ whenever $\lambda_i \ne 0$. In terms of the original enumeration this is exactly a representation as in $(1)$, with $k = \sigma(t)$.

Indeed, suppose for the sake of contradiction it’s not true. It means $(3)$ holds for every $t = n+1, n+2, \dots, m$ (here $D_t$ is the square matrix formed by the first $n$ columns after step $t$, and $D_n$ corresponds to the starting matrix), therefore

$$|\det D_m| > (m-1)^{m-n}\,|\det D_n|.$$

Since $\det D_n$ is a non-zero integer it follows

$$|\det D_m| > (m-1)^{m-n}. \tag{4}$$

Recall that $D_m$ is an $n \times n$ matrix, each of whose elements is either $0$ or $1$. For any such matrix $D$ it holds

$$|\det D| \le n^{n/2}. \tag{5}$$

This fact was proven in the previous blog post with an even sharper estimate, but let us prove it briefly again. Let $d_1, d_2, \dots, d_n$ be the columns of $D$ viewed as vectors in $\mathbb{R}^n$. Each $d_i$ is an $n$-dimensional binary vector, hence $\|d_i\| \le \sqrt{n}$. Further, by Hadamard’s inequality,

$$|\det D| \le \prod_{i=1}^{n} \|d_i\| \le n^{n/2},$$

which proves $(5)$. Putting $(4)$ and $(5)$ together, it yields

$$(m-1)^{m-n} < n^{n/2},$$
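To get a feel for how tight the zero-one determinant bound is at small sizes, here is a quick exhaustive check over all $3 \times 3$ binary matrices (a sketch; the true maximum is $2$, the Hadamard-type bound gives $3^{3/2} \approx 5.2$, and the standard sharper bound $(n+1)^{(n+1)/2}/2^n$, presumably the one from the previous post, gives exactly $2$ here):

```python
from itertools import product

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

n = 3
# exhaust all 2^9 zero-one 3x3 matrices
max_det = max(abs(det3((bits[0:3], bits[3:6], bits[6:9])))
              for bits in product((0, 1), repeat=9))

assert max_det == 2                                    # true maximum for n = 3
assert max_det <= n ** (n / 2)                         # Hadamard bound, ~5.196
assert max_det <= (n + 1) ** ((n + 1) / 2) / 2 ** n    # sharper bound, exactly 2
```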

Since $n \le m-1$, this gives $n^{m-n} < n^{n/2}$, i.e. $m - n < \frac{n}{2}$, hence $n > \frac{2m}{3}$, contradiction.
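The constructive part is easy to play with in code. Below is a minimal sketch on a hypothetical toy instance ($m = 6$, $n = 3 < 2m/3$; the vectors are my own illustrative choice): it solves the linear system of the second stage exactly over the rationals and verifies that the resulting representation has coefficients of magnitude at most $m-1$. At this tiny size the determinant bound already forces the coefficients to be small, so the swap branch never fires; the sketch exercises only the representation step, not the full two-stage procedure.

```python
from fractions import Fraction

def solve(D, b):
    """Solve D x = b exactly over the rationals (D is n x n, invertible)."""
    n = len(D)
    A = [[Fraction(D[r][c]) for c in range(n)] + [Fraction(b[r])] for r in range(n)]
    for col in range(n):                  # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

# hypothetical toy instance: m = 6 binary vectors in dimension n = 3 < 2m/3
m, n = 6, 3
v = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (1, 1, 1)]

# v_1, v_2, v_3 are already linearly independent, so the first stage is a no-op;
# D has the first n vectors as its columns
D = [[v[i][r] for i in range(n)] for r in range(n)]

for t in range(n, m):                     # represent v_{n+1}, ..., v_m
    lam = solve(D, v[t])
    # small coefficients, using only vectors with smaller original index
    assert all(abs(x) <= m - 1 for x in lam)
    for r in range(n):
        assert sum(lam[i] * v[i][r] for i in range(n)) == v[t][r]
```

For sizes where swaps do occur one would add the first stage and the swap loop on top of this; the exact rational arithmetic (`Fraction`) is what keeps the Cramer’s-rule bookkeeping reliable.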

### Further (slight) improvement.

In fact the same technique (inequalities $(4)$ and $(5)$, the latter replaced by the sharper determinant bound from the previous post) allows to further slightly improve the estimate and get

$$|A| \ge \frac{2m}{3} + c\,\frac{m}{\log m},$$

where $c > 0$ is some absolute constant.
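One can also see this slack numerically. The sketch below (with a hypothetical $m = 1000$) compares the logarithms of the two sides: every $n$ below $2m/3$ yields a contradiction, and the contradiction persists a bit beyond $2m/3$, which is where the $m/\log m$ gain comes from:

```python
import math

m = 1000

# inequalities (4) and (5) clash when (m - n) * log(m - 1) > (n / 2) * log(n)
def clash(n):
    return (m - n) * math.log(m - 1) > (n / 2) * math.log(n)

# every n below 2m/3 yields a contradiction ...
assert all(clash(n) for n in range(2, 2 * m // 3))
# ... and the contradiction persists some way beyond 2m/3
largest = max(n for n in range(2, m) if clash(n))
assert largest >= 2 * m // 3 + 5
```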