Thursday, 29 November 2012

Really stumped

How does this happen:


provided the transition function is defined like this:


???

It seems the transition function is only defined for a start state of q0. 


so I don't understand how they got:

δ(q1, 0), if x has an odd number of 0s and an even number of 1s
δ(q2, 0), if x has an even number of 0s and an odd number of 1s
δ(q3, 0), if x has an odd number of 0s and an odd number of 1s

(δ(q0, 0) = q1, if x has an even number of 0s and an even number of 1s, makes sense)
_______________________________________________
_______________________________________________

Got it.
Look directly at the diagram, and do the step.

Is that mathematically rigorous, though?
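One way to convince myself the diagram-stepping is legitimate: write the whole transition table down and run it. This is my own reconstruction (the state names and the convention that each state tracks the pair (parity of 0s, parity of 1s) are my reading of it, not necessarily the book's exact machine):

```python
# Assumed state meanings: q0 = (even 0s, even 1s), q1 = (odd 0s, even 1s),
# q2 = (even 0s, odd 1s), q3 = (odd 0s, odd 1s).
# Reading a 0 flips the 0-parity; reading a 1 flips the 1-parity.
delta = {
    ("q0", "0"): "q1", ("q0", "1"): "q2",
    ("q1", "0"): "q0", ("q1", "1"): "q3",
    ("q2", "0"): "q3", ("q2", "1"): "q0",
    ("q3", "0"): "q2", ("q3", "1"): "q1",
}

def run(x):
    """Run the DFA on the string x, starting from q0."""
    state = "q0"
    for symbol in x:
        state = delta[(state, symbol)]
    return state
```

Under this reading, δ(q1, 0) = q0 makes sense: for the machine to be sitting in q1 at all, the string read so far must have an odd number of 0s, so reading one more 0 makes the count even again.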

Workthrough Thoughts

on p. 193

"Note that a language may be an infinite set; each string in the language, however, is finite."

Question:

Can you have a set that supports infinite strings? Not an infinite number of strings, but strings that are themselves infinite?

Is a string of infinite length even a reasonable concept to propose?

If a "length" is infinite, can it even reasonably be said to be a "length"?  ... more like a lack of length?
(More musings: Maybe "lack of length" is reserved for the empty string, which has no length? Though I suppose a length of 0 is indeed a length. Just a 0 length.)

-----------------------------

I know that x^2 >= 2x so long as x >= 2; we've proved that.
But now to show that x^2 >= 2x - 1 so long as x >= 0?

-----

Got it!
x^2 >= 2x is stronger than x^2 >= 2x - 1.
So x^2 >= 2x - 1 is already true for all x >= 2.
The only additional cases you need to consider to get from "true for all x >= 2" to "true for all x >= 0" are 0 and 1. These can be shown directly.
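A quick sanity check of that argument (assuming, as I think the course intends, that x ranges over the natural numbers):

```python
# The two extra cases, checked directly.
assert 0 ** 2 >= 2 * 0 - 1   # x = 0:  0 >= -1
assert 1 ** 2 >= 2 * 1 - 1   # x = 1:  1 >= 1

# For x >= 2, the stronger fact x^2 >= 2x already gives x^2 >= 2x - 1.
for x in range(2, 1000):
    assert x ** 2 >= 2 * x >= 2 * x - 1
```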




On to Chapter 7 !


(Go Natalie, Go Natalie, Go Natalie)





Wednesday, 28 November 2012

Proving recursive algorithms

Reading through the remainder of Chapter 2 (2.7 onward) in an attempt to answer my question below, I wonder why the examples use an array of length 1 as the base case instead of an array of length 0.
Then I get to this:

and now I am puzzled. That's not all the cases? A subarray of length 0 is definitely still a subarray. Hopefully reading on will solve the mystery; otherwise I'll have to work through it on my own.

------------------------------

Got it. Well, half of it.

The precondition was that A is a non-empty array.  So I suppose that's why the base case is an array of length 1?

Still mediumly confused though because an array of length 1 can have a subarray of length 0.
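To make this concrete for myself, here's a toy divide-and-conquer maximum (my own example, not one from the notes) whose precondition is a non-empty array, so a length-1 base case really does cover everything the recursion can reach:

```python
def rec_max(A):
    """Maximum element of A. Precondition: A is non-empty.

    Because the precondition rules out length 0, a length-1 base case
    suffices. A length-0 base case would force us to invent a return
    value (e.g. negative infinity) for the empty array.
    """
    if len(A) == 1:
        return A[0]
    mid = len(A) // 2
    # Both halves are non-empty here, since len(A) >= 2,
    # so the precondition holds for each recursive call.
    return max(rec_max(A[:mid]), rec_max(A[mid:]))
```

My guess at the resolution: even though length-0 subarrays exist, the recursion never actually passes one to itself, so the correctness proof never needs that case.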

--------------------------
I suppose a "correctness specification" is the recursive analogue of an iterative loop invariant?
-------------------------

Aside :   Can you have an array of infinite length?!

Motivation:
It seems that for termination it suffices to show that a recursive call operates on a smaller section of the array each time. But if the array was infinitely long, would that actually make it end?

First Thought:
But we have the theorem that all decreasing sequences of Natural numbers are finite, so does that mean that smaller and smaller sections of the array must eventually culminate in a subarray of size 0 ? (or of size 1 if we're adhering to the strange base case specifications above?).
Is a countdown from infinity still a veritable countdown?
Does this theorem hold if there's no defined start of the sequence?
------------------------------

Big Questions

What is the equivalent of a loop invariant for recursive programs.....?


two things

one:

The previous post was incorrect. The loop invariant did not, in fact, remain unchanged. The invariant I had was true, but it was not helpful in proving the postcondition. I struggled to derive the new invariant, and after excessive trial and error I realized that since I would ultimately rely on the loop invariant to prove the postcondition, looking at the postcondition should have been my first step in trying to determine one.

and so I did!

which leads me to.....

two:

I SUCCESSFULLY PROVED MY FIRST PROGRAM.

yeyeyeyeyeye

Virtual high fives all around. (y)

Monday, 26 November 2012

MIND WARP

my binary search does not actually return the index of the desired item in the list or negative one.

it returns

"this is where x should go if it were in this list."

as in like

If x = 3, then p = 1 is returned.
If x = -2, then p = -1 is returned.
If x = 7, then p = 3 is returned.

WHAT KIND OF BINARY SEARCH IS THIS.

So then I'm thinking...

if something were smaller than all the elements in the list, wouldn't the index where it should go be 0 ?
because if we're using the logic that something smaller than all the elements in the list gets -1, then shouldn't something bigger than all the elements in the list get n, not n-1?

but then I go look at the actual stipulations of the post condition, namely A[0...p] <= x < A[p+1 ... n-1],

and then I work through the examples in my head, and I understand.
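Writing it out helped: a search satisfying A[0...p] <= x < A[p+1...n-1] is just "the rightmost index whose element is <= x", with p = -1 when x is smaller than everything. A sketch (the list [1, 3, 5, 7] is my guess at one consistent with the examples above):

```python
def find_p(A, x):
    """Return p such that A[0..p] <= x < A[p+1..n-1].

    Loop invariant: A[0..lo] <= x < A[hi..n-1], treating
    out-of-range slices as empty (so lo = -1 and hi = n are allowed).
    """
    lo, hi = -1, len(A)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if A[mid] <= x:
            lo = mid
        else:
            hi = mid
    return lo
```

With A = [1, 3, 5, 7] this reproduces the examples: find_p(A, 3) is 1, find_p(A, -2) is -1, and find_p(A, 7) is 3.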

Now I need to redo my algorithm :(.
At least the loop invariant proof will remain unchanged.

indexicality

struggling to be consistent in talking about sub arrays.

now I understand why the examples use the pseudo-notation A[i...j] instead of A[i:j].

ugh.

Sunday, 25 November 2012

Chapter 2 and A3q4

soooo I'm reading through  Chapter 2 about loop invariants, working through proving a binary search algorithm for A3q4 and I come across something in the inductive step for such a proof that reads:


"If there is no (j + 1)-st iteration then P(j + 1) holds trivially. So, assume that there is such an iteration."

p 52 internal, p 60 of the pdf.
And I'm sitting there like, if there is no (j + 1)-st iteration, why on earth are we concerned with it? Unable to figure out why this needed to be included in the proof, and knowing that Vassos probably had a good reason to put it there, I turned to Piazza to ask the question.

I start writing, and then I get to the part where I'm saying -


           Do we really need to include this?

           I'm having trouble figuring out why it's necessary, especially because in the
           definition of P(i) it states -
   
           P(i) : if the loop is executed at least i times, then...


And then it hits me.  The predicate reads  "*if* the loop is executed at least i times."
So if there is no (j+1)-st iteration, then the conditional proposition that is P(j+1) is indeed vacuously true.

*lightbulb moment*

Bonus

Working through Ch.2, and the course notes pretty much do A3q4.

#winning.
I re-found the other thing in the Chapter 3 notes that I had trouble wrapping my mind around.  In deriving the closed form formula for merge sort, Vassos makes the leap from

T(n) = cn + log2(n)dn  (*)

which is defined for all n which are powers of 2, to

T(n) <= k n log2(n)  (**)

as an upper bound.

now, factoring (*) I can easily obtain T(n) = (c + log2(n)d) n
...... but from there it's a stretch for me to get to (**).  What is k?

T(n) = (c + log2(n)d) n  = [c/log2(n) + d] log2(n) n

and c >= c/log2(n)  so maybe k = c + d  and

T(n) = (c + log2(n)d) n  = [c/log2(n) + d] log2(n) n  <= [c + d] log2(n) n = k log2(n) n ....?

Let's hope the course notes get to it.
--------------------
Actually, it turns out that k is going to be  2c + 4d.
I guess it's one of those things you work out during your proof, and that you can't actually know before.
Thinking back to CSC 165 that totally makes sense. In big theta (or big O or Omega) proofs, we almost always figured out what c and b were going to be as we went along.

Also, in this particular case k turned out that way because he made the step

2cn + 2dn  <= 2cn log2(n) + 2dn log2(n)    given that n >= 2.

Other steps are definitely also possible, and would have resulted in different k values.
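A numeric sanity check of the whole thing, under the usual assumptions (T(1) = c and T(n) = 2T(n/2) + dn for n a power of 2, which is the recurrence I believe (*) came from; the values of c and d are arbitrary picks):

```python
import math

c, d = 3, 5  # arbitrary positive constants

def T(n):
    """Merge-sort-style recurrence, defined for n a power of 2."""
    if n == 1:
        return c
    return 2 * T(n // 2) + d * n

k = 2 * c + 4 * d

for e in range(1, 12):
    n = 2 ** e
    lg = int(math.log2(n))               # exact for powers of 2
    assert T(n) == c * n + d * n * lg    # the closed form (*)
    assert T(n) <= k * n * lg            # the k n log2(n) upper bound, n >= 2
```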

-------------------------------------------
Difficulty 2
-------------------------------------------

now struggling with the conversion of 3.12 to 3.13, as below

I understand that by assuming a special value for n, namely that n is a power of b, we are able to collapse the floor/ceiling terms into the generic fraction. That's not hard to follow. The confusing part for me is the change in domain specification. Why n = 1 and n > 1 now, not 1 <= n < b and n >= b?
------------------------------
got it.
The smallest possible power of b is one. Namely, b^0.  All other powers of b are > 1, and therefore the domain of n can be broken up in this way.
In this special domain of n where n can only be a power of b, there is no n s.t. 1 <= n < b other than n = 1.  The next value of n, after n = b^0 = 1, is n = b^1 = b, which falls outside of 1<= n < b.


-------------------------------------------
Difficulty 3
-------------------------------------------
not a difficulty, but a discovery.

x^(log(y)) = y^(log(x)) provided the base is shared.

Cool!
Testing it out ...
x = 3
y = 8
base = 2
x^(log(y)) = 3^3 = 27
y^(log(x)) = [roughly] 8^(1.58496) = [roughly] 26.9999

Very cool!
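The identity checks out in code too (the values and bases below are arbitrary picks):

```python
import math

def check(x, y, base):
    """Verify x^(log_base(y)) == y^(log_base(x)) up to float error."""
    lhs = x ** math.log(y, base)
    rhs = y ** math.log(x, base)
    return math.isclose(lhs, rhs, rel_tol=1e-9)

assert check(3, 8, 2)      # the example above: both sides are 27
assert check(7, 11, 5)
assert check(2.5, 7.3, 3)
```

The reason it works: writing x = b^(log_b x), both sides equal b^(log_b x · log_b y), which is symmetric in x and y.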

-------------------------------------------
Difficulty 4
-------------------------------------------
Lower bound part of Master's Theorem -
Why the d requirements?
Strictly positive lambda I understand. We need to avoid the 0 case, which is true but not very meaningful.
-----------------------------
okay so if d = 0, then our derived closed form of

collapses to T(n) = cn^(logb(a)) in both cases [ b/c if a = b^l then logb(a) = l].  So the d = 0 case makes sense, allowing lambda prime to be c.
hmmm..

and then I guess the other cases also follow from this. If d != 0 then it is either < 0 or > 0. We don't allow it to be < 0 because negative steps don't make sense: you can't undo work.
So in this context, if d != 0 then d > 0. And so the d > 0 cases above divide into the two cases outlined by the closed form here.
Actually no, no they don't. They split into three. The a = b^l case is evident, but the a != b^l case divides into both a < b^l and a > b^l.
--------------------------------------------------------------------------------

Done Chapter 3.  Woohoo!

What did mama tomato say to baby tomato when he was walking slow?

Ketchup!

---- wait for it ----


aha k. But in all seriousness, time to get caught up. I'm about to read through the textbook, and I will post here as I encounter challenging concepts, detailing how I work my way through them. Prepare for a day of many posts!

Chapter 3

After consulting Olek in tutorial, I've come to the conclusion that indeed the issue in the post below had to do with the lack of a base case from which to build. It could have defined a function on the integers n <= 0, but not on the natural numbers n >= 0.