Calculator Inputs
This tool optimizes a two-variable objective surface using selection, crossover, mutation, and elitism. Results appear above this form after submission.
Example Data Table
These sample settings show how different landscapes may need different search behavior.
| Objective | Goal | Bounds | Population | Generations | Crossover | Mutation | Elite | Typical Good Outcome |
|---|---|---|---|---|---|---|---|---|
| Sphere Function | Minimize | -5 to 5 | 50 | 80 | 0.85 | 0.08 | 3 | Near 0.000000 |
| Rastrigin Function | Minimize | -5.12 to 5.12 | 80 | 120 | 0.85 | 0.12 | 4 | Near the global basin |
| Rosenbrock Valley | Minimize | -3 to 3 | 100 | 180 | 0.90 | 0.10 | 5 | Approaches (1,1) |
| Shifted Quadratic Peak | Maximize | -10 to 10 | 60 | 90 | 0.80 | 0.07 | 4 | Score close to 25 |
| Sine-Cosine Mixed Surface | Maximize | -10 to 10 | 100 | 160 | 0.88 | 0.15 | 5 | Strong local peak discovery |
Formula Used
1) Objective score
The calculator evaluates each candidate pair (x, y) with the selected objective surface.
- Sphere: f(x,y) = x² + y²
- Rastrigin: f(x,y) = 20 + x² + y² - 10(cos(2πx) + cos(2πy))
- Rosenbrock: f(x,y) = (1 - x)² + 100(y - x²)²
- Quadratic Peak: f(x,y) = 25 - (x - 3)² - (y + 2)²
- Sine-Cosine Mix: f(x,y) = 15 + 10sin(x)cos(y) - 0.1(x² + y²)
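The five surfaces above can be written directly as functions. This is a minimal sketch; the function names are assumptions, not the calculator's internal identifiers.

```python
import math

# The five objective surfaces listed above; names are illustrative.
def sphere(x, y):
    return x**2 + y**2

def rastrigin(x, y):
    return 20 + x**2 + y**2 - 10 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y))

def rosenbrock(x, y):
    return (1 - x)**2 + 100 * (y - x**2)**2

def quadratic_peak(x, y):
    return 25 - (x - 3)**2 - (y + 2)**2

def sine_cosine_mix(x, y):
    return 15 + 10 * math.sin(x) * math.cos(y) - 0.1 * (x**2 + y**2)
```

Evaluating each function at its known optimum (Sphere and Rastrigin at the origin, Rosenbrock at (1, 1), Quadratic Peak at (3, -2)) is a quick sanity check before running the search.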
2) Fitness conversion
For roulette selection, raw scores are shifted into strictly positive fitness values. In minimization, smaller raw scores map to larger fitness; in maximization, larger raw scores map to larger fitness.
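One common way to implement this shift is to measure each score's distance from the worst score in the population and add a small epsilon so every candidate keeps a nonzero selection chance. This is a sketch of that convention, not necessarily the exact formula the calculator uses.

```python
def to_fitness(scores, maximize):
    # Shift raw scores into strictly positive fitness values for roulette selection.
    # eps keeps even the worst candidate selectable with tiny probability.
    eps = 1e-9
    if maximize:
        lo = min(scores)
        return [s - lo + eps for s in scores]  # larger score -> larger fitness
    hi = max(scores)
    return [hi - s + eps for s in scores]      # smaller score -> larger fitness
```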
3) Blend crossover
Each child mixes parent genes with a random blend factor α ∈ [0, 1]: child gene = α·(parent A gene) + (1 − α)·(parent B gene). A separate blend factor is drawn for x and for y.
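The blend rule above can be sketched in a few lines. Because each α lies in [0, 1], every child gene lands between the two parent genes.

```python
import random

def blend_crossover(parent_a, parent_b, rng=random):
    # One alpha per gene: child gene = alpha * a + (1 - alpha) * b.
    ax, ay = rng.random(), rng.random()
    child_x = ax * parent_a[0] + (1 - ax) * parent_b[0]
    child_y = ay * parent_a[1] + (1 - ay) * parent_b[1]
    return (child_x, child_y)
```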
4) Mutation
Mutation perturbs a gene by a fraction of the search range: gene' = gene + random(-1,1) × mutationScale × range. The result is then clamped inside the selected lower and upper bounds.
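A direct translation of the mutation rule, assuming a uniform draw for the random(-1, 1) factor:

```python
import random

def mutate(gene, low, high, mutation_scale, rng=random):
    # Perturb by a fraction of the search range, then clamp to the bounds.
    span = high - low
    gene = gene + rng.uniform(-1, 1) * mutation_scale * span
    return max(low, min(high, gene))
```

The clamp means mutation near a boundary is biased back into the feasible region, which is a common and simple way to keep candidates inside the bounds.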
5) Improvement estimate
For minimization, improvement compares how much the best score decreases. For maximization, improvement compares how much the best score increases. The page reports percentage change between the initial best candidate and the final best candidate.
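The improvement estimate can be sketched as a signed percentage change, with the sign flipped for minimization so that positive always means "got better". The exact normalization the page uses is an assumption here.

```python
def improvement_percent(initial_best, final_best, maximize):
    # Positive result = improvement; assumes initial_best is nonzero.
    if maximize:
        delta = final_best - initial_best
    else:
        delta = initial_best - final_best
    return 100 * delta / abs(initial_best)
```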
How to Use This Calculator
- Select an objective function with the search behavior you want to study.
- Choose the population size and the number of generations.
- Set crossover rate, mutation rate, and mutation scale.
- Define X and Y bounds for the search space.
- Pick a selection method and the elite count.
- Enter a random seed if you want reproducible runs.
- Click Run Optimization to generate results above the form.
- Review the best solution, convergence graph, and export buttons.
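The steps above can be tied together in a minimal, self-contained run. This sketch uses truncation selection for brevity instead of roulette, and all parameter names and defaults are assumptions; it illustrates the loop, not the calculator's actual implementation.

```python
import random

def run_ga(objective, bounds, pop_size=50, generations=80,
           crossover_rate=0.85, mutation_rate=0.08, mutation_scale=0.1,
           elite=3, seed=42, maximize=False):
    # Minimal GA loop: selection, blend crossover, mutation, elitism.
    rng = random.Random(seed)
    (xlo, xhi), (ylo, yhi) = bounds
    pop = [(rng.uniform(xlo, xhi), rng.uniform(ylo, yhi)) for _ in range(pop_size)]
    key = (lambda p: -objective(*p)) if maximize else (lambda p: objective(*p))
    for _ in range(generations):
        pop.sort(key=key)
        nxt = pop[:elite]  # elitism: carry the best candidates over unchanged
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            if rng.random() < crossover_rate:
                ax, ay = rng.random(), rng.random()
                child = (ax * a[0] + (1 - ax) * b[0],
                         ay * a[1] + (1 - ay) * b[1])
            else:
                child = a
            x, y = child
            if rng.random() < mutation_rate:  # mutate each gene independently
                x = min(xhi, max(xlo, x + rng.uniform(-1, 1) * mutation_scale * (xhi - xlo)))
            if rng.random() < mutation_rate:
                y = min(yhi, max(ylo, y + rng.uniform(-1, 1) * mutation_scale * (yhi - ylo)))
            nxt.append((x, y))
        pop = nxt
    pop.sort(key=key)
    return pop[0], objective(*pop[0])
```

Running it on the Sphere surface with the table's sample settings (population 50, 80 generations) should land the best score near zero, matching the "Typical Good Outcome" column.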
Frequently Asked Questions
1) What does this calculator optimize?
It optimizes a two-variable mathematical surface using a genetic algorithm. You can study minimization and maximization behavior, compare objective landscapes, and inspect how search settings influence convergence, diversity, and solution quality.
2) How do population size and generations affect results?
Larger populations explore more candidates per generation, while more generations allow longer refinement. Both usually improve results, but they also increase total evaluations and runtime. Balance them according to difficulty and the ruggedness of the surface.
3) When should I raise mutation rate?
Raise mutation when the population converges too early or gets trapped near a weak local optimum. Lower mutation when solutions already improve steadily and excessive randomness starts preventing refinement near a promising region.
4) Why use elitism?
Elitism copies the best candidates into the next generation unchanged. This protects strong solutions from being lost during crossover and mutation. Too much elitism can reduce diversity, so keep the elite count small compared with the total population.
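The elitism step described above reduces to sorting by score and copying the top few candidates forward. A minimal sketch, assuming minimization:

```python
def apply_elitism(population, scores, elite_count):
    # Return the elite_count best candidates unchanged (smaller score = better).
    ranked = sorted(zip(scores, population))
    return [candidate for _, candidate in ranked[:elite_count]]
```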
5) Can one run guarantee the global optimum?
No. Genetic algorithms are stochastic. A single run can find an excellent answer, but not always the true global optimum. Repeat runs with different seeds, compare convergence plots, and verify whether the best result remains stable.
6) Can a genetic algorithm optimize rocket staging and dead weight?
A genetic algorithm can optimize stage mass ratios, structural fractions, and separation timing while respecting thrust and delta-v constraints. Dead weight is penalized in the fitness function, helping the search favor lighter staging plans that still meet mission requirements.
7) How do genetic algorithms compare with greedy algorithms for course scheduling?
Greedy methods are fast and simple, but they often settle for locally good schedules. Genetic algorithms explore many timetable combinations, handle competing constraints better, and can discover higher-quality schedules when teacher availability, room limits, and student conflicts interact strongly.
8) Which objective is best for learning?
Start with Sphere because it is smooth and easy to understand. Move next to Rastrigin or Rosenbrock to see how local minima and narrow valleys challenge the search. Then test maximization surfaces for broader comparison.