Posts

Finals preparation, due on December 13

The most important things I learned in this course were big-O notation, the Master Theorem, the various algorithm design styles available, and probability theory. To review for the test I'll make a list of important theorems and definitions to memorize, and practice proving simple facts about each of them. I'll also go over past tests and make sure I could get 100% on them. In class tomorrow I'd like to go over an example of Monte Carlo integration; I've sketched a toy version below. I feel like in this class, more than in any other class I've taken, I've learned how to apply the mathematics I know to real-world problems. I feel that way because I'm most interested in using computers to solve problems, and the principles we've talked about this semester have been foundational to my understanding of how to do that.
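
Here's the toy version I have in mind, just to check my understanding before class (the function name and setup are my own, not from the course materials): estimate the integral of x^2 over [0, 1], which is exactly 1/3, by averaging the integrand at uniform random points.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniform random sample points (error shrinks like O(1/sqrt(n)))."""
    xs = rng.uniform(a, b, size=n)
    return (b - a) * np.mean(f(xs))

# The integral of x^2 on [0, 1] is exactly 1/3.
print(mc_integrate(lambda x: x**2, 0.0, 1.0))
```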

8.7, due on December 11

I see the graphs of the Daubechies db2 scaling function, but I really don't understand how to use it, since there's no definition I can easily look at. We'll need to talk about that in class for me to understand it. Also, the proof of 8.7.5(iv) was long and I couldn't follow the algebra very well. The idea that you can use whichever version of the wavelets you want to reproduce the function you're looking at is very powerful. It's interesting to see such a complicated function as the Daubechies scaling function being put to use. I'd like to know more about how to choose a function that works well for different types of data.
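
One partial answer I found on my own: the db2 scaling function has no closed form (it's defined implicitly by its refinement equation), so the usual way to "look at" it is to approximate it numerically with the cascade algorithm. Assuming PyWavelets and matplotlib are available, something like this should plot it; this is my guess at a workflow, not something from the reading.

```python
import pywt
import matplotlib.pyplot as plt

# wavefun approximates the scaling and wavelet functions via the
# cascade algorithm; a higher level gives a finer approximation.
phi, psi, x = pywt.Wavelet('db2').wavefun(level=8)

plt.plot(x, phi, label='db2 scaling function (father)')
plt.plot(x, psi, label='db2 wavelet (mother)')
plt.legend()
plt.show()
```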

8.6, due on December 8

I'm having a hard time understanding how to build a basis out of wavelet vectors. It's not immediately obvious to me how to know which phi functions belong to each V. Honestly, this section was hard because I didn't initially understand Haar wavelets from the last section. It's interesting to see that the FWT doesn't take as long as the FFT for a given input. When we practice with FWTs in the lab, I think I'll get a better understanding of the concept. I can see this being useful for all sorts of curve analysis where the curve needs to be approximated but what you really want to see is its general trend.
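
To convince myself of the speed claim, I sketched a bare-bones Haar FWT (my own toy version, assuming the input length is a power of 2): each pass does work proportional to the current length, so the total is n + n/2 + n/4 + ... = O(n), versus O(n log n) for the FFT.

```python
import numpy as np

def haar_fwt(signal):
    """Bare-bones Haar fast wavelet transform (orthonormal version).

    Each pass splits the data into scaled averages (approximation)
    and scaled differences (detail), halving the length, so the
    total work is n + n/2 + n/4 + ... = O(n).
    """
    data = np.asarray(signal, dtype=float)
    details = []
    while len(data) > 1:
        avg = (data[0::2] + data[1::2]) / np.sqrt(2)
        diff = (data[0::2] - data[1::2]) / np.sqrt(2)
        details.append(diff)
        data = avg
    return [data] + details[::-1]  # coarsest approximation first

print(haar_fwt([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]))
```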

8.5, due on December 6

I like understanding how it's possible to create different representations of the same function depending on which aspects of the curve we're interested in. I would be interested in doing a coding lab where we use these Haar wavelets to solve problems, so I can get a feel for how each type of decomposition is used in applications. The most difficult part of the reading was understanding the proof that the spaces spanned by the sons and daughters are complementary subspaces that together make up V_j. I don't yet have a good sense for how to use the definitions of these different relatives (sons, daughters, father, mother, etc.), so I've written them out below to keep them straight.
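
These are the standard Haar definitions as I understand them (my own notes, so the text's indexing conventions may differ slightly):

```latex
% Father and mother functions:
\varphi(x) = \begin{cases} 1, & 0 \le x < 1,\\ 0, & \text{otherwise}, \end{cases}
\qquad
\psi(x) = \begin{cases} 1, & 0 \le x < \tfrac{1}{2},\\ -1, & \tfrac{1}{2} \le x < 1,\\ 0, & \text{otherwise}. \end{cases}
% Sons and daughters are dilated translates of the father and mother:
\varphi_{j,k}(x) = \varphi(2^{j}x - k), \qquad \psi_{j,k}(x) = \psi(2^{j}x - k).
% The sons at level j span V_j, the daughters span W_j, and
V_j = V_{j-1} \oplus W_{j-1}.
```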

8.4, due on December 4

Now I know what antialiasing is in computer graphics! It makes so much sense now. It's cool to see how something as simple as the Nyquist value can tell you so much about the parameters of a given problem. The methods for antialiasing are intuitive as well, kind of in the same way least-squares solutions were in Volume 1. It would be nice to get some intuition for why the Nyquist rate is twice the Nyquist value, and why the function becomes uniquely determined at that point and not before. Also, I'd like to run through a full example in class of how we can sample from a function, find the DFS, and perform antialiasing. That would help me understand the process a little better.
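
While I wait for that in-class example, here's a small numerical experiment I tried (my own, not from the book) that at least shows what goes wrong below the Nyquist rate: a 7 Hz sine sampled at 10 Hz produces exactly the same samples as a -3 Hz sine, so the samples can't uniquely determine the function.

```python
import numpy as np

fs = 10.0                    # sampling rate; Nyquist rate for 7 Hz is 14
t = np.arange(0, 1, 1 / fs)  # one second of sample times

high = np.sin(2 * np.pi * 7 * t)    # 7 Hz signal
alias = np.sin(2 * np.pi * -3 * t)  # its alias: 7 - 10 = -3 Hz

# The two signals agree at every sample point, so sampling at 10 Hz
# cannot distinguish them; that's aliasing.
print(np.allclose(high, alias))  # True
```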

8.3, due on December 1

The proof of the commutativity of the convolution was difficult to follow, with the change of variables and all. Also, it's hard for me to grasp the relationship between the Hadamard product and the convolution. It helped to see the example of removing low frequencies by using convolutions, but I'm still not totally clear on it. I had no idea you could convolve a function with a Kronecker delta to basically produce copies of it at different locations! That's pretty neat. And after reading Example 8.3.12, it makes sense how convolutions and component-wise multiplication can be used to edit a discrete signal.
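
Two quick numerical sanity checks I wrote for myself (toy examples, not from the text): the Kronecker delta copying trick, and the fact that circular convolution in time corresponds to the Hadamard (entrywise) product of DFTs in frequency.

```python
import numpy as np

# (1) Kronecker delta trick: convolving f with a delta shifted to
# index 2 reproduces a copy of f starting at that offset.
f = np.array([1.0, 2.0, 3.0])
delta = np.zeros(6)
delta[2] = 1.0
print(np.convolve(f, delta))  # [0. 0. 1. 2. 3. 0. 0. 0.]

# (2) Convolution theorem: circular convolution in time equals the
# inverse DFT of the Hadamard (entrywise) product of the DFTs.
g = np.array([4.0, 3.0, 2.0])
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
brute = np.array([sum(f[k] * g[(n - k) % 3] for k in range(3))
                  for n in range(3)])
print(np.allclose(via_fft, brute))  # True
```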

8.2, due on November 29

I'm excited to see what cool applications there are for the Fast Fourier Transform, since it seems applicable to anything having to do with sensors in a system. I imagine all sorts of uses in algorithms that need to use discrete measurements to predict what's going to happen in the future for continuous systems (like self-driving cars?). Something that doesn't seem intuitive to me is how the discrete Fourier transform gives the coefficients of the projection of f onto some subspace. I'd like to see some sort of visualization to better understand it. I also don't understand why sums of powers of a primitive nth root of unity become zero when the exponent is not a multiple of n. Also, why is the temporal complexity of the FFT O(n log n)?
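
My attempt at answering my own two questions with the standard arguments (so this is my reasoning, not the text's): the roots-of-unity fact is about the sum of the powers, which is a geometric series, and the FFT bound comes from a divide-and-conquer recurrence.

```latex
% With \omega = e^{2\pi i/n} and j not a multiple of n, \omega^j \ne 1, so
\sum_{k=0}^{n-1} \omega^{jk}
  = \frac{\omega^{jn} - 1}{\omega^{j} - 1}
  = \frac{(\omega^{n})^{j} - 1}{\omega^{j} - 1}
  = \frac{1 - 1}{\omega^{j} - 1}
  = 0.
% (If j IS a multiple of n, every term is 1 and the sum is n instead.)
% For the complexity: the FFT splits a size-n DFT into two size-n/2
% DFTs plus O(n) work to combine them, giving the recurrence
T(n) = 2\,T(n/2) + O(n),
% which the Master Theorem (from the algorithms review above) solves
% as T(n) = O(n \log n).
```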