A. Finding a zero of f'
B. Approximating f by a polynomial P and differentiating P
C. Differentiating a Fourier transform of f and then applying an inverse transform
D. Finding the slope of f at two randomly generated points
E. None of the above

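The stem for this block is missing, but the options describe ways to compute a numerical derivative; option B (differentiate a polynomial approximation of f) is the idea behind finite difference formulas. A minimal sketch, assuming equally spaced points, where differentiating the quadratic interpolant through x−h, x, x+h reproduces the centered difference formula:

```python
def centered_difference(f, x, h=1e-5):
    # Differentiating the quadratic interpolant through
    # (x-h, f(x-h)), (x, f(x)), (x+h, f(x+h)) and evaluating at x
    # collapses to the centered difference formula, accurate to O(h^2).
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

For f(x) = x³ at x = 2 this returns a value very close to the exact derivative 12.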
A. 2.0
B. 2.1
C. 1.9
D. 0.21
E. 0.19

A. Every difference formula has a truncation error formula that can be minimized.
B. High-degree interpolating polynomials oscillate too much.
C. Underflows increase as h → 0.
D. Overflows increase as h → 0.
E. Rounding errors increase as h → 0.

A. 2.000
B. 1.999
C. 1.989
D. 1.899

A. Centered difference formula at each gridpoint
B. 3-point difference formulas: forward difference at the left endpoint, backward difference at the right endpoint, and centered differences elsewhere
C. Backward difference on the left half of the gridpoints, and forward difference on the right half
D. Alternating forward and backward differences, beginning with a backward difference at the left endpoint

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. 2.0
B. 2.1
C. 1.9
D. 0.21
E. 0.19

A. They use a Taylor polynomial for f'.
B. They integrate a piecewise polynomial approximation to f.
C. They are a type of Monte Carlo method.
D. They require orthogonal polynomial integration.
E. None of the above

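Whatever the exact stem here, option B describes Newton-Cotes-type rules: integrate a piecewise polynomial approximation to f. A minimal sketch, assuming the piecewise-linear case (the composite trapezoid rule):

```python
def composite_trapezoid(f, a, b, n):
    # Integrate the piecewise-linear interpolant of f built on
    # n equal subintervals of [a, b]: endpoints get weight h/2,
    # interior nodes get weight h.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total
```

With f(x) = x² on [0, 1] and n = 1000 the result is within about 10⁻⁷ of the exact value 1/3.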
A. 0.3750
B. 0.5000
C. 0.6667
D. 0.3333
E. 0.2500

A. 0.3750
B. 0.5000
C. 0.3333
D. 1.500
E. 0.3125

A. It subdivides if the integral is too large.
B. It subdivides if the derivative is too large.
C. It subdivides if rounding error exceeds truncation error.
D. It subdivides if the error estimate is too large.

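Option D is the defining behavior of adaptive quadrature: estimate the error on each subinterval and subdivide wherever the estimate is too large. A minimal recursive sketch, assuming trapezoid estimates (production codes typically compare Simpson pairs instead):

```python
def adaptive_quad(f, a, b, tol=1e-8):
    # Compare one trapezoid over [a, b] against two trapezoids over
    # the halves; their difference serves as an error estimate.
    m = 0.5 * (a + b)
    whole = 0.5 * (b - a) * (f(a) + f(b))
    halves = 0.25 * (b - a) * (f(a) + 2.0 * f(m) + f(b))
    if abs(whole - halves) < tol:
        return halves                  # estimate small enough: accept
    # Otherwise subdivide and recurse, splitting the tolerance.
    return adaptive_quad(f, a, m, tol / 2) + adaptive_quad(f, m, b, tol / 2)
```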
A. Using a left-sided Riemann sum
B. Approximating f by a polynomial P and integrating P
C. Integrating a Fourier transform of f and then applying an inverse transform
D. Finding the average height of f at n randomly generated points
E. None of the above

A. 13.02
B. 13.01
C. 13.0
D. 13.1
E. None of the above

A. 31%
B. 0.449
C. 44.9%
D. 0
E. 1.449

A. 1
B. 0.4
C. 0
D. 0
E. 1.3

A. t-g
B. g
C. All t
D. 0
E. It depends on the compiler.

A. Both commutative and associative
B. Commutative but not associative
C. Neither commutative nor associative
D. Associative but not commutative

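Floating-point addition is commutative (fl(a + b) = fl(b + a)) but not associative, because each operation rounds its result; a quick check in IEEE-754 double precision:

```python
a, b, c = 0.1, 0.2, 0.3

# Commutativity holds: a single rounded sum does not depend on order.
print(a + b == b + a)              # True

# Associativity fails: the two groupings round differently.
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 vs 0.6
```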
A. Swamping does not violate the fundamental axiom of floating point arithmetic, but cancellation does.
B. Cancellation loses precision, while swamping does not.
C. Swamping loses precision, while cancellation does not.
D. Swamping can only happen with multiplication and cancellation only with addition.

A. Underflow
B. Overflow
C. The distance between 1 and the nearest float to 1
D. 1/Overflow

A. y underflowed.
B. y = 0.
C. |y| is less than |x|*(machine epsilon).
D. fl(0+y) = 0.

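Option C describes swamping: when |y| falls far enough below |x| times machine epsilon, the sum fl(x + y) rounds back to x exactly. A quick demonstration in double precision:

```python
import sys

eps = sys.float_info.epsilon   # ~2.22e-16 for IEEE-754 doubles
x, y = 1.0, 1e-20              # |y| is far below |x| * eps

print(x + y == x)              # True: y is swamped, fl(x + y) = x
```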
A. The floats are farther apart but have a larger range.
B. The floats are nearer each other but have a smaller range.
C. There are more floats.
D. Underflow is smaller.
E. The machine precision grows.

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Highly accurate
B. Correctly rounded
C. Backward stable
D. Well conditioned
E. Robust

A. Swamping
B. Machine epsilon
C. Truncation error
D. Cancellation
E. None of the above

A. Backward stable
B. Well conditioned
C. Robust
D. Ill conditioned
E. Highly accurate

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Problem is stable and method is backward stable.
B. Problem is stable and method is well conditioned.
C. Problem is well conditioned and method is well conditioned.
D. Problem is well conditioned and method is backward stable.

A. Cancellation limit
B. Machine epsilon
C. Underflow
D. Backward error
E. None of the above

A. 0.003
B. 0.00324
C. 0.003242
D. 0.00
E. None of the above

A. 142
B. 142.324
C. 142.3
D. 142.32
E. None of the above

A. Chopping
B. Cancellation
C. Truncation Error
D. Swamping
E. Tail Error

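The numeric blocks above turn on chopping (discarding digits) versus rounding to a given number of significant digits. A small sketch of chopping; the helper name is my own:

```python
import math

def chop(x, sig):
    # Keep sig significant digits of x and discard the rest
    # (chopping, i.e. truncation toward zero -- no rounding).
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))   # exponent of the leading digit
    scale = 10.0 ** (sig - 1 - exp)
    return math.trunc(x * scale) / scale
```

For example, chopping 142.324 to 4 significant digits gives 142.3, and chopping 0.003242 to 2 significant digits gives 0.0032.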
A. [formula missing]
B. [formula missing]
C. Given [formula missing]
D. Find [formula missing]
E. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. An interpolator and a numerical differentiation rule
B. A quadrature rule and an IVP solver
C. A root finder and a quadrature rule
D. An IVP solver and a root finder

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. None of the above

A. Because of rounding errors
B. Because of truncation errors
C. As [formula missing]
D. As [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. None of the above

A. Because it averages 3 values in [formula missing]
B. Because Euler's method is unstable
C. Because the corrector needs a prediction
D. Because a 3-step method needs 2 previous approximations to y
E. Because a 3-step method needs 3 previous approximations to y

A. Euler's method is unstable.
B. Euler's method has local truncation error [formula missing]
C. Euler's method has large rounding errors.
D. Euler's method is too slow for a 3-step method.
E. Taylor methods are more general.

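For context, Euler's method advances y′ = f(t, y) one first-order step at a time; its local truncation error is O(h²), which is why it is a weak starter for higher-order multistep methods. A minimal sketch:

```python
def euler(f, t0, y0, h, n):
    # Advance y' = f(t, y) with n steps of size h:
    #   y_{k+1} = y_k + h * f(t_k, y_k)
    # Local truncation error is O(h^2), global error O(h).
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y
```

With f(t, y) = y, y(0) = 1, h = 0.001 and 1000 steps, the result approximates e ≈ 2.71828 to about three digits.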
A. The corrector is typically a higher-order Runge-Kutta method.
B. The corrector is typically a low-order Runge-Kutta method.
C. The corrector is typically an implicit method.
D. The corrector is typically a higher-order explicit multistep method.

A. 1.297
B. 1.015
C. 0.7650
D. 1.547

A. 1
B. 2
C. 4
D. 5
E. 6

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Taylor's method
B. The Adams-Bashforth method
C. The Adams-Moulton method
D. The Runge-Kutta method

A. 3.00
B. 3.75
C. 2.00
D. 4.55
E. 6.30

A. A good starting guess
B. An error estimate
C. A multistep method
D. A list of allowable step sizes

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. 1
B. 2
C. 4
D. 5
E. 6

A. 1
B. 2
C. 4
D. 5
E. 6

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Stiff IVPs require a small timestep.
B. Stiff IVPs require predictor-corrector methods.
C. Stiff IVPs require Taylor methods.
D. Stiff IVPs require high-order methods.

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Taylor methods require the evaluation of [formula missing]
B. Taylor polynomials oscillate too much.
C. Taylor methods replace derivatives with function evaluations.
D. Taylor methods are not parallelizable because of nested function evaluations.

A. k+1
B. k/2
C. 2k
D. k-1

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. The initial [formula missing]
E. [formula missing]

A. Partial differential equation
B. Ill-posed differential equation
C. Side-condition differential equation
D. Boundary value problem

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. It has degree [formula missing]
B. It is [formula missing]
C. It is [formula missing]
D. It is a root of [formula missing]
E. It is deflated.

A. Newton's, secant, bisection
B. Newton's, bisection, secant
C. Secant, Newton's, bisection
D. Secant, bisection, Newton's
E. Bisection, Newton's, secant

A. The Chinese remainder method
B. Degree slashing
C. Synthetic division
D. Deflation

A. Newton's method is more accurate.
B. Extracting [formula missing]
C. It is pretend.
D. Rounding errors and truncation errors work against each other.

A. Its degree is 4.
B. Its degree is no more than 4.
C. Its degree is no more than 7.
D. Its degree must be more than 7.
E. None of the above

A. Hermite
B. Taylor
C. Lagrange
D. Piecewise linear
E. Quadratic spline

A. 2.6
B. 2.8
C. 3.9
D. 1.5
E. 2.9

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. There is no error; it is well posed.

A. Only one
B. Depends upon the knot positions
C. n+1 Lagrange basis functions
D. Infinitely many

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. None of the above

A. 4.1
B. 3.8
C. 3.9
D. 3.5
E. 2.9

A. Osculating polynomial
B. Lagrange interpolator
C. Hermite interpolator
D. Vandermonde interpolator
E. None of the above

A. Three basis functions, each of degree 2
B. Two basis functions, each of degree 2
C. Two basis functions, each of degree 3
D. Three basis functions, of degree 1, degree 2, and degree 3

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. 4
B. 5
C. 6
D. 7

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. None of the above

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. [formula missing]

A. 10.149
B. 10.150
C. 5.5074
D. 9.8500
E. 10.001

A. n
B. 2n
C. n+2
D. n/2
E. None of the above

A. It is close to the correct answer.
B. It gives the bisector of the zero.
C. It is not complex.
D. It gives function height and slope.
E. None of the above

A. 4
B. 5
C. 6
D. 8
E. 10

A. f has exactly one zero in [a,b].
B. f has 0, 1, or infinitely many zeros in [a,b].
C. f has an even number of zeros in [a,b].
D. f has an odd number of zeros in [a,b].

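The options above concern how many zeros a continuous f has in [a, b]. When f(a) and f(b) have opposite signs, the intermediate value theorem guarantees at least one zero, which the bisection method exploits; a minimal sketch:

```python
def bisect(f, a, b, tol=1e-10):
    # Requires f(a) and f(b) of opposite sign; each iteration halves
    # the bracketing interval, so the error shrinks by one bit per step.
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m            # a zero lies in [a, m]
        else:
            a, fa = m, fm    # a zero lies in [m, b]
    return 0.5 * (a + b)
```

For f(x) = x² − 2 on [0, 2] this converges to √2.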
A. Newton's method
B. Bisection method
C. Secant method
D. Muller's method
E. False position

A. Newton's method
B. Secant method
C. Bisection method
D. None of the above

A. Neither Newton's nor bisection can be applied here.
B. Neither Newton's nor secant can be applied here.
C. Bisection cannot be applied here.
D. Newton's method cannot be applied here.

A. For each iteration, Newton's method adds 1 correct bit and bisection adds about 0.5 correct bits.
B. For each of the two iterations, Newton's method doubles the number of correct bits and bisection adds 2 correct bits.
C. For each iteration, Newton's method doubles the number of correct bits and bisection adds 1 correct bit.
D. For each iteration, Newton's method adds two correct bits and bisection adds 1 correct bit.

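Option C states the standard comparison: bisection halves the bracketing interval, gaining about one correct bit per iteration, while Newton's method converges quadratically, roughly doubling the number of correct digits per iteration once it is close. A sketch on f(x) = x² − 2:

```python
import math

def newton_step(x):
    # One Newton iteration for f(x) = x^2 - 2, using f'(x) = 2x:
    #   x_new = x - f(x) / f'(x)
    return x - (x * x - 2.0) / (2.0 * x)

x, root = 1.0, math.sqrt(2.0)
for _ in range(5):
    x = newton_step(x)         # the error roughly squares each time

print(abs(x - root) < 1e-12)   # True: 5 iterations already suffice
```

Bisection starting from the bracket [1, 2] would need about 40 iterations to match that accuracy.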
A. Linearizing [formula missing]
B. Bisecting the line from [formula missing]
C. Approximating [formula missing]
D. Approximating [formula missing]
E. None of the above

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. [formula missing]

A. It does not always converge.
B. It may divide by zero.
C. It requires the evaluation of f'.
D. It may get stuck in a cycle.
E. It requires complex arithmetic.

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]

A. Using the secant angle between [formula missing]
B. Averaging Newton's method and the bisection method
C. Approximating [formula missing]
D. Finding the x-intercept of the line joining [formula missing]
E. None of the above

A. k-1
B. k
C. k+1
D. k+2
E. 2k

A. [formula missing]
B. [formula missing]
C. [formula missing]
D. [formula missing]
E. [formula missing]
