Thursday, 29 November 2012

Really stumped

How does this happen:


provided the transition function is defined like this:


???

It seems the transition function is only defined for the start state q0.


so I don't understand how they got:

δ(q1, 0), where q1 is the state for x with an odd number of 0s and an even number of 1s
δ(q2, 0), where q2 is the state for x with an even number of 0s and an odd number of 1s
δ(q3, 0), where q3 is the state for x with an odd number of 0s and an odd number of 1s

(δ(q0, 0) = q1, where q0 is the state for x with an even number of 0s and an even number of 1s, makes sense)
_______________________________________________
_______________________________________________

Got it.
Look directly at the diagram, and do the step.

Is that mathematically rigorous, though?
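
For future me, here's what "do the step" looks like written out in Python, assuming the machine is the usual track-both-parities DFA (q0 = even 0s and even 1s, q1 = odd 0s and even 1s, q2 = even 0s and odd 1s, q3 = odd 0s and odd 1s). The full table, including the transitions on 1, is my own reconstruction, not copied from the notes:

# hypothetical reconstruction of the parity DFA: state = (parity of 0s, parity of 1s)
delta = {
    ('q0', '0'): 'q1', ('q0', '1'): 'q2',
    ('q1', '0'): 'q0', ('q1', '1'): 'q3',
    ('q2', '0'): 'q3', ('q2', '1'): 'q1',
    ('q3', '0'): 'q2', ('q3', '1'): 'q0',
}

def run(dfa, start, string):
    # "do the step": apply the transition function once per symbol
    state = start
    for symbol in string:
        state = dfa[(state, symbol)]
    return state

print(run(delta, 'q0', '010'))  # two 0s (even), one 1 (odd) -> prints 'q2'

The point being: δ is defined on every (state, symbol) pair, not just on q0, and under this reading δ(q1, 0) = q0, δ(q2, 0) = q3, δ(q3, 0) = q2 each come from reading one step straight off the diagram.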

Workthrough Thoughts

on p. 193

"Note that a language may be an infinite set; each string in the language, however, is finite."

Question:

Can you have a set that supports infinite strings? Not an infinite number of strings, but strings that are themselves infinite?

Is a string of infinite length even a reasonable concept to propose?

If a "length" is infinite, can it even reasonably be said to be a "length"?  ... more like a lack of length?
(More musings: Maybe "lack of length" is reserved for the empty string, which has no length? Though I suppose a length of 0 is indeed a length. Just a 0 length.)

-----------------------------

I know that x^2 >= 2x so long as x >= 2; we've proved that.
But how to show that x^2 >= 2x - 1 so long as x >= 0?

-----

Got it!
x^2 >= 2x is stronger than x^2 >= 2x - 1.
So x^2 >= 2x - 1 is already true for all x >= 2.
The only additional cases you need to consider to get from "true for all x >= 2" to "true for all x >= 0" are 0 and 1. These can be shown directly.
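
A throwaway Python check of the two extra cases (and of the claim itself over a small range of naturals), just to be safe:

# the two extra cases, shown directly:
#   0^2 = 0 >= 2*0 - 1 = -1
#   1^2 = 1 >= 2*1 - 1 =  1
# and a spot check over a range of natural numbers:
assert all(x * x >= 2 * x - 1 for x in range(1000))
print("x^2 >= 2x - 1 holds for x = 0, ..., 999")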




On to Chapter 7 !


(Go Natalie, Go Natalie, Go Natalie)





Wednesday, 28 November 2012

Proving recursive algorithms

Reading through the remainder of Chapter 2 (2.7 onward) in an attempt to answer my question below, I wonder why the examples use an array of length 1 as the base case instead of an array of length 0.
Then I get to this:

and now I am puzzled. Those aren't all the cases: a subarray of length 0 is definitely still a subarray. Hopefully reading on will solve the mystery; otherwise I'll have to work through it on my own.

------------------------------

Got it. Well, half of it.

The precondition was that A is a non-empty array.  So I suppose that's why the base case is an array of length 1?

Still mediumly confused, though, because an array of length 1 can have a subarray of length 0.
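
Here's a little sketch of what I mean, with my own made-up example (not the one from the notes): a recursive function whose precondition is a non-empty array and whose base case is length 1.

def recursive_max(A):
    # Precondition: A is a non-empty list.
    # Postcondition: returns the largest element of A.
    if len(A) == 1:                # base case: non-emptiness guarantees we bottom out here
        return A[0]
    rest = recursive_max(A[1:])    # recursive call on a strictly shorter, still non-empty list
    return A[0] if A[0] > rest else rest

print(recursive_max([3, 1, 4, 1, 5]))  # 5

Every recursive call is on a strictly shorter but still non-empty list, so the length-0 case never actually arises; I think that's what the precondition is buying us.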

--------------------------
I suppose a "correctness specification" is the recursive analogue of an iterative loop invairant?
-------------------------

Aside :   Can you have an array of infinite length?!

Motivation:
It seems that for termination it suffices to show that a recursive call operates on a smaller section of the array each time. But if the array were infinitely long, would that actually make it end?

First Thought:
But we have the theorem that all decreasing sequences of natural numbers are finite, so does that mean that smaller and smaller sections of the array must eventually culminate in a subarray of size 0? (Or of size 1, if we're adhering to the strange base case specification above?)
Is a countdown from infinity still a veritable countdown?
Does this theorem hold if there's no defined start of the sequence?
------------------------------

Big Questions

What is the equivalent of a loop invariant for recursive programs.....?


two things

one:

The previous post was incorrect. The loop invariant did not, in fact, remain unchanged. Though the loop invariant as I had stated it was correct, it was not helpful in proving the postcondition. I struggled to derive what the new invariant should be, but after excessive trial and error I realized that I was going to have to rely on the loop invariant to prove the postcondition, so looking at the postcondition should have been my first step in trying to determine one.

and so I did!

which leads me to.....

two:

I SUCCESSFULLY PROVED MY FIRST PROGRAM.

yeyeyeyeyeye

Virtual high fives all around. (y)

Monday, 26 November 2012

MIND WARP

my binary search does not actually return the index of the desired item in the list or negative one.

it returns

"this is where x should go if it were in this list."

as in like

If x = 3, then p = 1 is returned.
If x = -2, then p = -1 is returned.
If x = 7, then p = 3 is returned.

WHAT KIND OF BINARY SEARCH IS THIS.

So then I'm thinking...

if something were smaller than all the elements in the list, wouldn't the index where it should go be 0?
Because if we're using the logic that something smaller than all the elements in the list gets -1, then shouldn't something bigger than all the elements in the list get n, not n-1?

but then I go look at the actual stipulations of the postcondition, namely A[0...p] <= x < A[p+1...n-1],

and then I work through the examples in my head, and I understand.

Now I need to redo my algorithm :(.
At least the loop invariant proof will remain unchanged.
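
For future me, a rough sketch of what the redone version should look like; the concrete list A = [1, 3, 5, 7] is just my guess at numbers matching the examples above.

def binary_search(A, x):
    # Postcondition: returns p such that A[0...p] <= x < A[p+1...n-1]
    # (so p = -1 means x is smaller than everything in A).
    lo, hi = -1, len(A)            # rough invariant: A[0...lo] <= x < A[hi...n-1]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if A[mid] <= x:
            lo = mid
        else:
            hi = mid
    return lo

A = [1, 3, 5, 7]                   # my guess at a list matching the examples
print(binary_search(A, 3))         # 1
print(binary_search(A, -2))        # -1
print(binary_search(A, 7))         # 3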

indexicality

struggling to be consistent in talking about subarrays.

now I understand why the examples use the pseudo-notation A[i...j] instead of A[i:j].

ugh.

Sunday, 25 November 2012

Chapter 2 and A3q4

soooo I'm reading through Chapter 2 about loop invariants, working through proving a binary search algorithm for A3q4, and I come across something in the inductive step for such a proof that reads:


"If there is no (j + 1)-st iteration then P(j + 1) holds trivially. So, assume that there is such an iteration."

p 52 internal, p 60 of the pdf.
And I'm sitting there like if there is no (j + 1)-st iteration, why on earth are we concerned with it. Unable to figure out why this needed to be included in the proof, and knowing that Vassos probably had a good reason to put it there, I turned to piazza to ask the question.

I start writing, and then I get to the part where I'm saying -


           Do we really need to include this?

           I'm having trouble figuring out why it's necessary, especially because in the
           definition of P(i) it states -
   
           P(i) : if the loop is executed at least i times, then...


And then it hits me.  The predicate reads  "*if* the loop is executed at least i times."
So if there is no (j+1)-st iteration, then the conditional proposition that is P(j+1) is indeed vacuously true.

*lightbulb moment*

Bonus

Working through Ch.2, and the course notes pretty much do A3q4.

#winning.
I re-found the other thing in the Chapter 3 notes that I had trouble wrapping my mind around. In deriving the closed-form formula for merge sort, Vassos makes the leap from

T(n) = cn + dn log2(n)  (*)

which is defined for all n which are powers of 2, to

T(n) <= k n log2(n)  (**)

as an upper bound.

now, factoring (*) I can easily obtain T(n) = (c + log2(n)d) n
...... but from there it's a stretch for me to get to (**).  What is k?

T(n) = (c + log2(n)d) n  = [c/log2(n) + d] log2(n) n

and c >= c/log2(n) for n >= 2 (since log2(n) >= 1 there), so maybe k = c + d and

T(n) = (c + log2(n)d) n  = [c/log2(n) + d] log2(n) n  <= [c + d] log2(n) n = k log2(n) n ....?

Let's hope the course notes get to it.
--------------------
Actually, it turns out that k is going to be  2c + 4d.
I guess it's one of those things you work out during your proof, and that you can't actually know before.
Thinking back to CSC 165 that totally makes sense. In big theta (or big O or Omega) proofs, we almost always figured out what c and b were going to be as we went along.

Also, in this particular case k turned out that way because he made the step

2cn + 2dn <= 2cn log2(n) + 2dn log2(n), given that n >= 2.

Other steps are definitely also possible, and would have resulted in different k values.
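
A quick numeric spot check that both choices of k really do give upper bounds on powers of 2 that are at least 2 (c = 3 and d = 2 are arbitrary values I picked just to plug in numbers):

import math

c, d = 3, 2                                           # arbitrary positive constants, just for illustration
for n in [2, 4, 8, 16, 1024]:                         # powers of 2 that are >= 2
    T = c * n + d * n * math.log2(n)                  # the closed form (*)
    my_bound = (c + d) * n * math.log2(n)             # my k = c + d (valid once log2(n) >= 1)
    notes_bound = (2 * c + 4 * d) * n * math.log2(n)  # the k = 2c + 4d from the notes
    assert T <= my_bound <= notes_bound
print("both choices of k give valid upper bounds on these values")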

-------------------------------------------
Difficulty 2
-------------------------------------------

now struggling with the conversion of 3.12 to 3.13, as below

I understand that by assuming a special value for n, namely that n is a power of b, we are able to collapse the floor/ceiling terms into the generic fraction. That's not hard to follow. The confusing part for me is the change in domain specification. Why n = 1 and n > 1 now, not 1 <= n < b and n >= b?
------------------------------
got it.
The smallest possible power of b is 1, namely b^0. All other powers of b are > 1, and therefore the domain of n can be broken up in this way.
In this special domain where n can only be a power of b, there is no n s.t. 1 <= n < b other than n = 1. The next value of n, after n = b^0 = 1, is n = b^1 = b, which falls outside of 1 <= n < b.


-------------------------------------------
Difficulty 3
-------------------------------------------
not a difficulty, but a discovery.

x^(log(y)) = y^(log(x)), provided the base is shared.

Cool!
Testing it out ...
x = 3
y = 8
base = 2
x^(log(y)) = 3^3 = 27
y^(log(x)) = [roughly] 8^(1.58496) = [roughly] 26.9999

Very cool!
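
The one-line reason it works: take log (in the shared base) of both sides and you get log(y)*log(x) on the left and log(x)*log(y) on the right, which are equal. A quick check in Python with the same numbers:

import math

x, y, base = 3, 8, 2
lhs = x ** math.log(y, base)   # 3^(log2 8) = 3^3 = 27
rhs = y ** math.log(x, base)   # 8^(log2 3) ~ 8^1.58496 ~ 27
print(lhs, rhs)
assert math.isclose(lhs, rhs)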

-------------------------------------------
Difficulty 4
-------------------------------------------
Lower bound part of Master's Theorem -
Why the d requirements?
Strictly positive lambda I understand. We need to avoid the 0 case, which is true but not very meaningful.
-----------------------------
okay so if d = 0, then our derived closed form of

collapses to T(n) = cn^(logb(a)) in both cases [because if a = b^l then logb(a) = l]. So the d = 0 case makes sense, allowing lambda prime to be c.
hmmm..

and then I guess the other cases also follow from this. If d != 0 then it is either < 0 or > 0. We don't allow it to be < 0 because negative steps don't make sense; you can't undo work.
So in this context, if d != 0 then d > 0. And so the d > 0 cases above divide into the two cases outlined by the closed form here.
Actually no, no they don't. They split into three. The a = b^l case is evident, but the a != b^l case divides into both a < b^l and a > b^l.
--------------------------------------------------------------------------------

Done Chapter 3.  Woohoo!

What did mama tomato say to baby tomato when he was walking slow?

Ketchup!

---- wait for it ----


aha k. But in all seriousness, time to get caught up. I'm about to read through the textbook, and I will post here as I encounter challenging concepts, detailing how I work my way through them. Prepare for a day of many posts!

Chapter 3

After consulting Olek in tutorial, I've come to the conclusion that indeed the issue in the post below had to do with the lack of a base case from which to build. It could have defined a function with domain n <= 0 over the integers, but not over the natural numbers n >= 0.

Monday, 29 October 2012

Catching up on chapter 3

Reading through chapter 3 we come across this:

which is not a properly defined recursive function.  The text then goes on to say

"Superficially, the reason appears to be that it defines f'(n) in terms of the value of f' at a larger argument, f'(n+1). It is true that it does not define a function, but our reasoning as to why that is so is not adequate."

and then goes on to show how it is equivalent to this
in an effort to somehow dispel our previous faulty reasoning.
I'm struggling to see how writing the function in this latter form, with a >= in the place of a >, somehow addresses the concern that it's defined in terms of higher and not lower terms... because it still is defined in terms of higher terms. Maybe it has to do with there being no proper base case? I'm not sure I get it. I'm going to ask the T.A.s.
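
To make the worry concrete, here's a made-up stand-in with the same upward-recursing shape; the body is my own invention, since the actual definition is in the excerpt above.

def f_prime(n):
    # hypothetical stand-in: f'(n) is "defined" only in terms of f'(n + 1)
    return f_prime(n + 1) + 1

# f_prime(0) never reaches a base case, so it never produces a value
# (Python eventually raises RecursionError), which is the operational
# symptom of the definition not actually defining f' on the naturals.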

Tuesday, 23 October 2012

Sights on the Weekend

This past Monday's quiz was the first I've struggled with.  Moreover, I no longer feel like I'm on top of the content we're covering in class during lecture. I am pleased with my current standing in the course but I feel like in order to maintain this level I'm going to have to put in some serious hours.

This weekend (if I ever actually get to Friday, that is #MidtermMadness) will be dedicated, among other things, to catching up on the content covered since the midterm, as well as making a significant dent in A2, if not finishing it. Looking forward, I suspect that the content in the final portion of the course will be difficult for me, and so I need to solidly understand our current topics in order to be able to succeed then.

Among my planned A2 adventures is learning LaTeX, finally. Woohoo!

Relevant:

Monday, 15 October 2012

Midterm Reflections

Overall, I'm very happy with how the midterm went. I thought it was fair and of an appropriate length. There was definitely some panic that settled in during the few days leading up while cdf was down, but all in all, I have no complaints.

I'm eager to get it back, particularly to see how the third (Structural Induction) question was marked.  I'm confident in my answer, but I struggled to present it in what I felt was a tidy, concise format. It felt verbose.

In a similar vein, I feel slightly awkward stating the Induction Hypothesis for complete induction. In my mind I know what it needs to be, but when expressing it symbolically I usually resort to some sort of "true for all i element of {0,...,n}", which doesn't seem right. Even though it conveys the same message, I feel like we've been writing it with an inequality, and that I should be doing the same.

This last point kept getting skimmed over in my review sessions. It was just a presentation detail, and so in studying I wanted to make sure I understood key concepts first before I fine-tuned their presentation. I was thankful today in Tutorial when I got quiz 2 back with some helpful red scribbles rephrasing my poorly stated I.H. That was exactly the kind of pointer I needed, and now I know!

Looking forward
I missed Friday's class, but the slides tell me we've started Time Complexity. I can't remember from 165 if I rocked at this or totally blew it... but I remember all the key ideas, so in all honesty it can't have been that bad. I need to review those slides. I also need to go over closed-form formulas. Though I understand them when they are presented to me, I seriously struggle to derive them myself.

Wednesday, 3 October 2012

Expedience

So I'm none too pleased about not being able to hand-write our assignments. However, I'm appeasing myself with the justification that it's about time I learned how to write LaTeX anyways. Another upside is the white-out to be saved. Hooray, saved white-out!

I was going to do a live-blog (slog?) titled "Adventures in LaTeX" about downloading, understanding, and using the LyX editor, but with Thanksgiving weekend fast approaching and no spare time on the horizon I'm resigning myself to struggling through MS Word for this assignment. Understanding LyX and the WYSIWYM paradigm will simply have to wait until A2.

I'm keeping my fingers crossed that I'm not doing myself a disservice here! Hopefully I'll be able to express myself symbolically in Word without too much hassle. I'm open to the possibility that the finicky nature of Word's typesetting functions will actually cost me more time, but I'll risk it.

As a point of interest, this particular criterion of the assignment (no hand-written work) has given me a new article idea. If any of you, dear readers, are members of New College watch out for an opinion piece in issue 3 of The Window --which should be out November 1st--  about how the prevalence and availability of technology is impacting our learning experience. I'll probably also throw something in there about the student right to submit hand-written assignments as stated by UTSU vs. the advantages of early adoption (though LaTeX isn't exactly new at this point in the game....) and how the standpoints should and do vary according to discipline. Thanks, CSC236, for an article!

Now, to go plow through a field of superscript corrections...

Monday, 1 October 2012

Adventures in LaTeX

I'm none too pleased about not being allowed to hand write our assignments. However, I'm appeasing myself with the justification that it's about time I learned how to write LaTeX anyways. Another upside is the white-out to be saved. Hooray, saved white-out!

This is going to be a live-blog (sLog?). What I mean by this is that, in a similar vein to live-tweeting an event, I will be live-blogging this event. Here we go! I'm starting at the course homepage, and clicking through to the "a1.tex" file.
-------------------
OK. So I probably should have expected as much, but the .tex file just opens in my browser like a .txt file. Must find an environment....

Settling on LyX. After trying to find an online editor, and failing, and then trying to see if I could export a .tex file from Word or Notepad or something, and failing, I've conceded to downloading a new program. After some research into what program that should be, I've settled on LyX.

Wikipedia tells me LyX is WYSIWYM, not WYSIWYG. After reading Alex's struggle with this paradigm, I'm keeping my fingers crossed that I'll just intuitively understand it.
(Side note - I love installers. So seamless! One day I'm going to learn how to make installers and it's going to be glorious.)
-------------------
I have come to the decision that it will be more fruitful to actually read the LyX intro/documentation/tutorial than it will be to simply keep my fingers crossed.  So...I'll let you guys know how that goes?

Over, and out.