Showing posts from September, 2017

3.1, due on September 29

I really like the idea of adjacency matrices. They make a lot of sense, and they mean that the rules that apply to matrices can be used to understand graphs. I've already gained a lot of understanding of data structures from CS 235 and from my own experience with Python, so I'm excited to gain a more rigorous understanding of the subject. The difference between directed and undirected graphs was something I'd never considered before, so it would be good to go over in class why that distinction matters for the theorems presented in the section.
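
To make the connection concrete, here's a small sketch I tried on my own (a toy example, not from the text): powers of the adjacency matrix count walks in the graph.

```python
# My own toy example: an undirected graph on 4 vertices as an adjacency matrix.
# Edges: (0,1), (1,2), (2,3), (0,3)
import numpy as np

A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# For an undirected graph the matrix is symmetric; a directed graph would drop
# that symmetry. Entry (i, j) of A @ A counts the walks of length 2 from i to j.
walks_of_length_2 = A @ A
print(walks_of_length_2)
```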

2.8, due September 27

Something that was kind of difficult to understand was what the book meant by "for any exact power ..." I know what it's getting at now after reading the general case, but it's not explained very clearly in the proofs. Also, the proof for the case when b^d<a was not immediately clear in either the simple or the general version. It is interesting to be able to prove the Master Theorem, because it turns out to be a fairly simple proof! This time I could see why the lemmata were used, unlike in the proof of Stirling's Approximation. And the Master Theorem is very useful for estimating temporal complexity. I've spent about 2-4 hours per day on homework, with half of that time spent on this class specifically. Where I've gained the most solid understanding of the material is in the homework, but I understand best when I do the reading, go to class, and do the homework within a reasonable time; then my understanding increases at each step. I'm starting to improve ho...
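
To keep the cases straight, here's a rough sketch of how I think about them, using the usual statement T(n) = a·T(n/b) + O(n^d); the book's notation may differ slightly.

```python
# A rough sketch of the Master Theorem cases for T(n) = a*T(n/b) + O(n^d).
# The names a, b, d follow the usual statement, not necessarily the book's.
import math

def master_theorem_case(a, b, d):
    """Return the asymptotic order of T(n) = a*T(n/b) + O(n^d)."""
    if a < b ** d:          # work at the top level dominates
        return f"O(n^{d})"
    elif a == b ** d:       # every level contributes about the same
        return f"O(n^{d} log n)"
    else:                   # b^d < a: the leaves of the recursion dominate
        return f"O(n^{math.log(a, b):.3g})"

# Merge sort: T(n) = 2T(n/2) + O(n) -> prints O(n^1 log n), i.e. O(n log n)
print(master_theorem_case(2, 2, 1))
```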

2.2, due on September 20

What was interesting about this section was how each of the theorems related to an equivalent theorem in calculus. I thought it was interesting to see the differences there. I'm also excited to see if there's a mathematical way to recognize methods for reducing the temporal complexity of algorithms using summations, something hinted at near the beginning of the section. What I still don't understand about the section is how to determine the indices when two summations switch places. Most of the time I see that the indices change their starting values, but I didn't quite understand the pattern for determining the new values. It would help if we talked about visualizing it in class.
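
Here's a quick check I wrote to convince myself about the index change when the sums swap (my own toy example, summing i·j over the pairs 1 ≤ i ≤ j ≤ n in both orders):

```python
# Swapping the order of a double summation: both orders visit the same pairs,
# but the inner index's old lower bound becomes the new inner upper bound.
n = 6

# Original order: sum over i = 1..n, then j = i..n
original = sum(i * j for i in range(1, n + 1) for j in range(i, n + 1))

# Swapped order: sum over j = 1..n, then i = 1..j
swapped = sum(i * j for j in range(1, n + 1) for i in range(1, j + 1))

print(original, swapped)  # both give the same value
```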

2.1, due on September 18

The most difficult part of the section was the proof of the Division Theorem. There was a part where the proof explains why r<|b|, but I didn't follow why that led to a contradiction. The theorem as a whole makes sense, and I understand the other parts of the proof. The Euclidean Algorithm is pretty cool. I had never seen it before, so it's interesting to see that something so simple can give the greatest common divisor. What's more, the proof was pretty simple to write and understand.
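
Here is the Euclidean Algorithm as I understand it, written out in Python (a minimal sketch, not the book's presentation): keep replacing (a, b) with (b, a mod b) until the remainder is zero.

```python
# Euclidean Algorithm: the last nonzero remainder is the greatest common divisor.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(1071, 462))  # 21
```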

1.5, due on September 15

What interested me the most in the reading was the far-reaching application of the Master Theorem. I've always thought recursive algorithms could get inefficient pretty quickly, but this theorem lets me figure out when a recursive algorithm is helpful and when it's not. What was the most difficult to understand was the algorithm for faster multiplication. Maybe it's because I'm reading this late at night, but I didn't see how it was any faster. The same thing goes for the faster matrix multiplication.
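
After writing this I tried sketching the multiplication trick myself. I'm assuming the section's faster multiplication is the Karatsuba-style split, where the saving is that three recursive products replace the four a naive split would use; if the book's version differs, the idea is at least similar.

```python
# A sketch of divide-and-conquer integer multiplication (Karatsuba-style split,
# my assumption about what the section describes). Three recursive products
# instead of four is what makes it asymptotically faster than the naive method.
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678), 1234 * 5678)  # should match
```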

1.4, due on September 13

What interested me the most in the reading was learning about different algorithms used to solve the same problem. I had seen this happen before with merging and sorting algorithms, but never with linear algebra applications. I'd like to actually understand how those algorithms work so I can see why they work better. What was the most difficult to understand was the explanation of methods to solve linear systems using matrix equations of the form Ax=b. The LU decomposition method isn't familiar to me, and the explanation wasn't detailed enough for me to grasp why it would be faster than trying to find the inverse. Was I supposed to understand why solving Ly=b and Ux=y would each cost ~n^2, or just take that on faith?
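
To convince myself about the ~n^2 count, I sketched forward substitution for Ly=b (my own example, not the book's): row i needs a dot product of length i plus one division, so the total work is about 1 + 2 + ... + n ≈ n^2/2. Back substitution for Ux=y is the same count in reverse.

```python
# Forward substitution for a lower-triangular system Ly = b.
import numpy as np

def forward_substitution(L, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # one dot product of length i, plus one division
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([2.0, 5.0, 32.0])
print(forward_substitution(L, b))  # compare with np.linalg.solve(L, b)
```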

1.3, due on September 11

What interested me the most in the reading was the explanation about using exponents and logarithms to make an algorithm stable. I'd like to know what the algorithm is for computing logarithms, because I wonder how that affects the speed of the overall algorithm. I bet this method has a lot of applications in making large calculations. What was the most difficult to understand was the explanation of "the standard model for floating-point arithmetic," which had a mathematical expression involving variables like precision, exponent, and an s whose purpose I didn't understand. The expression as a whole didn't make sense to me, and Example 1.3.3 below it didn't really make things clearer. I'd like to go over this in class.
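
I'm not certain this is exactly the algorithm the section has in mind, but here's a common instance of the exponent/logarithm trick (my own toy example): multiplying many small numbers underflows to zero, while summing their logarithms stays perfectly representable.

```python
# Products of many small probabilities underflow; sums of their logs do not.
import math

probs = [1e-5] * 100          # true product is 1e-500, far below double-precision range

naive = 1.0
for p in probs:
    naive *= p                 # underflows to 0.0

log_sum = sum(math.log(p) for p in probs)   # about -1151.3, easily representable
print(naive, log_sum)
# Comparing or combining the log values avoids the unstable intermediate products.
```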

1.1-1.2, due on September 8

Difficulty

I completed the reading. It's still difficult for me to see how to design values of N to show that one function is little-oh of another. I see how it reduces the algebra later, but I have a hard time coming up with it on my own. In 1.2, the discussion of the convergence of the Gregory-Leibniz formula was hard to understand; I got lost where it began treating the error at odd and even values of k.

Reflection

Something I thought was interesting from the text was Proposition 1.1.8. That, coupled with Example 1.1.5 (vi), made it clear to me how to understand the order of functions with respect to big and little oh. If a function is little-oh of another, I feel like that means I can find a smaller function to compare it to. If the quotient of the two functions approaches infinity, I know the function grows too quickly for the function we're comparing it to. Basically, the comparison makes a lot more sense now.
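
To make the odd/even behavior a little more concrete, I ran a quick numerical experiment on my own (not from the text): the partial sums of the Gregory-Leibniz series alternate above and below pi/4, and the error shrinks slowly.

```python
# Partial sums of the Gregory-Leibniz series pi/4 = sum_k (-1)^k / (2k + 1).
import math

target = math.pi / 4
partial = 0.0
for k in range(10):
    partial += (-1) ** k / (2 * k + 1)
    print(k, partial, partial - target)   # the sign of the error flips with k
```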

Introduction, due on September 8

I am a sophomore at BYU, studying ACME. I've chosen to study linguistics as my concentration. Beyond calculus, I've taken linear algebra, differential equations, Math 290, and Math 341. I chose to study ACME because I feel like the program provides a good blend of the math that I've always loved with the applications I'm interested in. Usually, a mathematical concept doesn't excite me until I see how it's applied in a problem-solving scenario; so I appreciate when the math I study is applied in the lab the next day. One of my favorite professors taught me calculus 2, calculus 3, and differential equations. I loved how he would dive deep into the theory discussed, and spend time on several illustrative examples. I'm a visual learner, so what I most liked about his teaching style was that he always created moving diagrams in Geometer's Sketchpad that really helped me understand the principle. Something unique about me is that I love learning language...