4.1, due on October 9
Something that confused me was the difference between an optimal value and an optimizer. Remark 4.1.2 discusses the distinction, but I'm still at a loss for what each one actually is.
The example about Fibonacci numbers was really interesting, because I've built Fibonacci number generators in Python and Java, and they always take forever for large inputs. But implementing the algorithm with memoization makes it run very quickly! It's a neat idea. Regarding the issue with spatial complexity, I feel like the high memory cost can be mitigated by working iteratively and deleting previous results. Of course, this wouldn't help if the program needed to run multiple times and wanted temporal savings on later runs. But this way the spatial complexity is constant, because it only needs one variable to store the current n-value and two variables for the current Fibonacci values. Also, the program could store just the most recent results (for n and n-1) each time, which would act as a bookmark: the program could skip to it and then move backward or forward through the Fibonacci values. I feel like a program like this would have spatial complexity of O(1) and temporal complexity of at most O(n), but very often lower than that.
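To make the comparison concrete, here's a quick sketch in Python of the three versions I'm describing: the slow recursive one I'd built before, a memoized one (using the standard-library `functools.lru_cache` as the memo), and the iterative one that keeps only the two most recent values. The function names are just my own.

```python
from functools import lru_cache

def fib_naive(n):
    # The version that "takes forever": each call re-solves the same
    # subproblems, so the running time grows exponentially in n.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down with memoization: each subproblem is computed once and
    # cached, giving O(n) time but also O(n) space for the cache.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n):
    # Iterative version: only the two most recent values are kept,
    # so the space is O(1) while the time stays O(n).
    a, b = 0, 1  # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```

`fib_memo(100)` and `fib_iter(100)` both return instantly, while `fib_naive(100)` would never finish.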
OK, so I just read the subsection on bottom-up approaches. This algorithm is equivalent to that one, plus the bookmarking idea to reduce temporal complexity after running several times.
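The bookmarking idea could be sketched like this: keep a checkpoint of the last position n along with F(n) and F(n+1), so a later call can resume from there instead of restarting from zero, moving forward with the usual recurrence or backward using F(n-1) = F(n+1) - F(n). This is my own sketch, not anything from the text; the class and its names are hypothetical.

```python
class FibBookmark:
    """Bottom-up Fibonacci with a 'bookmark': the last computed position
    is remembered, so repeated calls move from the checkpoint rather
    than recomputing from the start. Space stays O(1)."""

    def __init__(self):
        # Invariant: a = F(n), b = F(n + 1). Start at the bookmark n = 0.
        self.n, self.a, self.b = 0, 0, 1

    def fib(self, target):
        # Move the bookmark forward: (F(n), F(n+1)) -> (F(n+1), F(n+2)).
        while self.n < target:
            self.a, self.b = self.b, self.a + self.b
            self.n += 1
        # Move the bookmark backward using F(n-1) = F(n+1) - F(n).
        while self.n > target:
            self.a, self.b = self.b - self.a, self.a
            self.n -= 1
        return self.a
```

After `fib(1000)` has run once, a call like `fib(995)` only takes five backward steps, which is the "often lower than O(n)" behavior I was hoping for.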