Mathematics is the language of science, and every real field of science can be "boiled down" to a set of mathematical theories. The kind of mathematics mostly used in computer science is discrete rather than continuous, unlike the mathematics used in several other engineering disciplines. (The difference between discrete systems and continuous ones is like the difference between the sets of integers and real numbers.)
The main object of study in theoretical computer science is the algorithm, which can be defined as a specific set of instructions for carrying out a procedure or solving a problem. The difference between an algorithm and a program is that we can implement an algorithm for doing a particular job in any programming language that we like, resulting in a program for that job in that specific language. Since theoretical study is best done by abstracting away irrelevant details and focusing on the fundamentals, we assume that all algorithms that we examine are written in a standard language. (There is a theorem which says that we are justified in making this assumption, i.e. that every algorithm can be written in any programming language.)
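To make the algorithm/program distinction concrete, here is an illustrative sketch: Euclid's algorithm for the greatest common divisor is an abstract recipe, while the function below is one particular program implementing it, in Python. The same recipe could just as well be written in any other language.

```python
# Euclid's algorithm as one concrete program (illustrative example).
# The abstract recipe: repeatedly replace the pair (a, b) with
# (b, a mod b) until the remainder b becomes zero; a is then the GCD.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # one step of the abstract recipe
    return a

print(gcd(48, 18))  # prints 6
```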
Also note that theoretical computer science seems to focus on software, rather than hardware. We are not interested in the materials out of which the computer running our algorithms is built. In this way, we are guaranteeing that the results we obtain from our theoretical analysis will be valid for any machine which is able to carry out the basic operations of our chosen programming language. The particular choice of electronic circuits for building computers that we use nowadays is just a matter of convenience. One can build a computer out of, say, steam pipes and springs, if one wishes.
Just as theoretical physics imposes certain natural limits on physical objects (like the speed of light, for instance), it turns out that there are such "laws of nature" involved with our science as well. Alan Turing proved in 1936 that there are certain natural and meaningful problems for which no algorithm can exist; the most famous example is the halting problem, the task of deciding whether a given program will eventually stop running. That is, no matter how good a programmer you are, you cannot write a program corresponding to the specification for those unsolvable problems, because the infinite set of all possible programs simply does not contain such a program!
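The core of Turing's argument can be sketched in a few lines of code. Suppose (as an assumption, for the sake of contradiction) that someone hands us a perfect halting checker `halts`; we can then build a program that does the opposite of whatever the checker predicts about it, so no checker can ever be correct on every program.

```python
# A sketch of Turing's diagonal argument. `halts` is a hypothetical,
# claimed-perfect halting checker supplied from outside; we construct
# a program that the checker is guaranteed to misjudge.

def make_paradox(halts):
    """Given a claimed halting checker, build a function it must get wrong."""
    def paradox():
        if halts(paradox):   # checker predicts "paradox halts"...
            while True:      # ...so loop forever, proving it wrong
                pass
        # checker predicts "paradox loops forever": halt immediately
    return paradox

# Example: a (necessarily wrong) checker that always answers "loops forever".
p = make_paradox(lambda prog: False)
p()  # halts at once, contradicting that checker's prediction
```

With a checker that always answers "halts", the constructed program would instead loop forever. Since every possible checker is wrong on its own paradox program, no perfect halting checker can exist.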
Even if a problem is solvable, this does not necessarily mean that we can happily use our computers to solve it in a reasonable amount of time. Complexity theory is the study of the "difficulty" levels (in terms of the number of algorithm steps required during the solution) of solvable problems. The number of solution steps in "easy" problems (those in the class called P) grows relatively slowly, like a polynomial, as you increase the problem size. For many other important problems (in particular, the so-called NP-complete problems), every solution method found so far requires a number of steps that can grow exponentially with the problem size, as in the plot of the function y = e^x, and such problems are considered "hard". Whether easy (polynomial-time) solutions exist for these hard problems, the famous "P versus NP" question, is one of the most important unanswered questions of mathematics.
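The gap between the two growth rates is easy to see numerically. As an illustration (the step counts below are hypothetical stand-ins, not any particular algorithm), compare a polynomial cost of n squared with an exponential cost of 2 to the n:

```python
# Illustrative comparison of polynomial vs. exponential step counts
# as the problem size n grows.

def polynomial_steps(n):
    return n ** 2       # typical shape for an "easy" (class P) problem

def exponential_steps(n):
    return 2 ** n       # typical shape for known algorithms on "hard" problems

for n in (10, 20, 40):
    print(n, polynomial_steps(n), exponential_steps(n))
```

At n = 40 the polynomial cost is only 1600 steps, while the exponential cost already exceeds a trillion; doubling the problem size again puts it far beyond any computer's reach.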
Theoretical computer scientists use abstract machines to model programs and computers in their studies. Abstract machines are mathematical objects which have the fundamental symbol-manipulation characteristics that exist in computer programs. There are several classes of abstract machines, with each class in the hierarchy corresponding to a more "powerful" kind of computing device. Real computers that are constructible in the real world correspond to the abstract machine class of finite state machines. Turing's above-mentioned theorem about unsolvability is based on a much stronger machine class (that of Turing machines) which have infinite memory capacity, so even that hypothetical capability is not enough for solving the unsolvable problems.
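A finite state machine is simple enough to sketch directly. The illustrative example below has just two states and recognizes bit strings containing an even number of 1s; the machine's entire "memory" is which of the two states it is currently in.

```python
# A minimal finite state machine (illustrative example): two states,
# "even" and "odd", tracking the parity of the number of 1s seen so far.

def accepts_even_ones(bits):
    state = "even"                       # start state
    for b in bits:
        if b == "1":                     # a 1 flips the parity state
            state = "odd" if state == "even" else "even"
    return state == "even"               # accept iff we end in "even"

print(accepts_even_ones("1001"))  # True: two 1s
print(accepts_even_ones("1011"))  # False: three 1s
```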
There is an interesting relationship between linguistics (the science of languages) and computer science. It turns out that the set of all languages can also be divided into a hierarchy of classes, and that there is a correspondence between the language classes and machine classes; "simple" languages can be "understood" by "weak" machines, but one needs a "stronger" machine to understand more complicated languages. It is easy to prove that most languages are so complicated that no machine which understands them can exist. (Human languages are easier, and it seems theoretically possible to build programs which understand and "speak" in those "natural" languages.) The field of computational linguistics focuses on these issues.
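A standard illustration of this correspondence is the language of strings consisting of some number of a's followed by the same number of b's. No finite state machine can recognize it, because a fixed set of states cannot count arbitrarily high; a machine with one unbounded counter (a restricted form of a stack), however, handles it easily. The sketch below simulates such a counter machine:

```python
# Recognizer for the language { a^n b^n : n >= 0 } (illustrative sketch).
# The unbounded integer `count` is exactly the extra power that a
# finite state machine lacks.

def accepts_anbn(s):
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' is not allowed
                return False
            count += 1           # count up for each 'a'
        elif ch == "b":
            seen_b = True
            count -= 1           # count down for each 'b'
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False         # only a's and b's are in the alphabet
    return count == 0            # accept iff the counts match exactly

print(accepts_anbn("aabb"))  # True
print(accepts_anbn("aab"))   # False
```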
Apart from the algorithms which process items of information, the theoretical properties of the information which is processed by those algorithms can also be studied. The branch of mathematics dealing with the efficient and accurate storage, transmission, and representation of information is called information theory.
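A small taste of information theory, as an illustrative example: Shannon's entropy formula measures the average number of bits per symbol that an optimal encoding of a message source needs, given the probabilities of the symbols.

```python
# Shannon entropy of a probability distribution, in bits per symbol.
import math

def entropy(probabilities):
    # H = -sum(p * log2(p)); terms with p = 0 contribute nothing
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(entropy([1.0]))        # 0.0 bits: a certain outcome carries no information
print(entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols
```

The more predictable the source, the lower its entropy, and the fewer bits an efficient code needs to store or transmit its messages.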
Advances in quantum physics seem to make possible the construction of processors which do many things at the same time, in quite counterintuitive ways. Researchers in quantum computation hope to utilize these features of nature to build incredibly fast computers, which would be especially promising in the area of cryptography (the study of encryption and its reverse, decryption).