The soundness of real-time systems depends not only on the exactness of the results but also on their delivery according to given timing constraints. Real-time schedulers need an estimate of the worst-case execution time (WCET), i.e., the longest possible run time, of each process to guarantee that all deadlines will be met. In addition to being necessary to guarantee the correctness of soft- and hard-real-time systems, the worst-case execution time is also useful in multimedia systems, where it can be used to better distribute computing resources among processes by adapting the quality of service to the current system load. This dissertation presents a set of static compile-time analyses to estimate the worst-case execution time of Java processes, focusing on the approximation of the WCET of fairly large soft-real-time applications on modern hardware systems.

In a first phase, the program is analyzed to extract semantic information and to determine the maximal (and minimal) number of iterations for each basic block, bounding, where possible, every cyclic structure in the program. As a first step, the semantic analyzer, embedded in an ahead-of-time bytecode-to-native Java compiler, performs an abstract interpretation pass over linear code segments, bounding the set of possible values of local variables at each program point and discovering a subset of the program's infeasible paths. This step is followed by a loop-bounding algorithm based on local induction variable analysis that, combined with structural analysis, delivers fine-grained per-block bounds on the minimum and maximum number of iterations. To resolve the targets of method calls, the compiler performs a variable-type-based analysis that reduces the size of the call graph.

In a second phase, the duration of the program's individual instructions or basic blocks is computed with respect to the target hardware platform and the computational context in which the instructions are executed. Modern systems with preemption and modern architectures with non-constant instruction durations (due to pipelining, branch prediction, and multiple levels of cache) hinder a fast and precise computation of a program's WCET. Instead of simulating the CPU's behavior on all possible paths, we apply the principle of locality, limiting the effects of a given instruction to a restricted period in time and allowing us to analyze large applications in asymptotically linear time. We demonstrate the effectiveness of these analyses in approximating the worst-case execution time of a number of programs, ranging from small kernels to soft-real-time applications.
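
To illustrate the abstract interpretation step, the following is a minimal sketch of interval-based value tracking over a straight-line code segment; the class and method names are invented for illustration and are not the dissertation's actual implementation, which runs inside the ahead-of-time compiler.

    import java.util.HashMap;
    import java.util.Map;

    final class Interval {
        final long lo, hi;                        // inclusive bounds
        Interval(long lo, long hi) { this.lo = lo; this.hi = hi; }

        // Transfer function for integer addition: [a,b] + [c,d] = [a+c, b+d].
        Interval add(Interval o) { return new Interval(lo + o.lo, hi + o.hi); }

        // A branch "x < k" is infeasible when even the smallest possible
        // value of x fails the test; such edges can be pruned as infeasible paths.
        boolean mayBeLessThan(long k) { return lo < k; }

        @Override public String toString() { return "[" + lo + ", " + hi + "]"; }
    }

    public class IntervalDemo {
        public static void main(String[] args) {
            Map<String, Interval> env = new HashMap<>();
            env.put("i", new Interval(0, 0));             // int i = 0;
            env.put("n", new Interval(1, 100));           // n known to lie in [1,100]
            env.put("i", env.get("i").add(env.get("n"))); // i = i + n;
            System.out.println("i = " + env.get("i"));    // i = [1, 100]
            // The branch "i < 0" can never be taken: an infeasible path.
            System.out.println("i < 0 feasible? " + env.get("i").mayBeLessThan(0));
        }
    }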
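
The loop-bounding idea for counted loops can be sketched as follows: for a loop of the form "for (i = lo; i < hi; i += step)" with a positive step, the maximal trip count is the ceiling of (hi - lo) / step. The helper below is illustrative only, assuming the induction variable's parameters have already been recovered by the analysis.

    public class LoopBound {
        // Returns the maximal number of iterations, or 0 if the body never runs.
        static long maxIterations(long lo, long hi, long step) {
            if (step <= 0) throw new IllegalArgumentException("need step > 0");
            if (lo >= hi) return 0;
            return (hi - lo + step - 1) / step;   // ceiling division
        }

        public static void main(String[] args) {
            // for (int i = 0; i < 10; i += 3) runs for i = 0, 3, 6, 9: 4 iterations.
            System.out.println(maxIterations(0, 10, 3)); // prints 4
        }
    }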
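
The call-graph reduction can be understood through a minimal sketch of the idea behind variable-type analysis: the possible receiver types at a virtual call site are the intersection of the declared type's subtypes with the classes actually instantiated in the program, which often leaves a single target. The class names below are hypothetical.

    import java.util.HashSet;
    import java.util.Set;

    public class VtaDemo {
        public static void main(String[] args) {
            // Subtypes of the declared receiver type (including itself).
            Set<String> subtypesOfDeclared = Set.of("Shape", "Circle", "Square");
            // Classes for which a "new" expression occurs anywhere in the program.
            Set<String> instantiated = Set.of("Circle", "List", "String");

            Set<String> possibleReceivers = new HashSet<>(subtypesOfDeclared);
            possibleReceivers.retainAll(instantiated);
            // Only Circle remains a possible receiver: the call devirtualizes
            // to a single target, shrinking the call graph.
            System.out.println(possibleReceivers); // [Circle]
        }
    }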
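
Finally, the locality principle applied to timing can be sketched as follows: instead of tracking hardware state along every complete path, a block's cost is taken as its worst case over bounded-length predecessor contexts, which keeps the analysis linear in program size. The cycle counts, penalty model, and one-block window below are invented for illustration.

    import java.util.List;
    import java.util.Map;

    public class LocalTiming {
        public static void main(String[] args) {
            // Base cycle counts per block, plus a penalty that depends only on
            // the immediately preceding block (e.g., a cache or pipeline effect).
            Map<String, Integer> baseCost = Map.of("A", 10, "B", 7, "C", 12);
            Map<String, Integer> penaltyAfter = Map.of("A", 0, "B", 3, "C", 5);

            // Worst-case cost of block C = base cost + worst penalty over the
            // bounded context (here, a window of one predecessor block).
            List<String> predecessorsOfC = List.of("A", "B");
            int worst = baseCost.get("C") + predecessorsOfC.stream()
                    .mapToInt(penaltyAfter::get).max().orElse(0);
            System.out.println("WCET contribution of C: " + worst); // 12 + 3 = 15
        }
    }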