\chapter{A Theory about Propositional Logic} \label{meta}
In Chapter \ref{theories}, we talked about how to formulate theories
in predicate logic. Now here's a weird idea: how about we formulate a
theory (in predicate logic) about propositional logic? Think of it
this way. Suppose that you travel far back in time, to a time when
human beings didn't know how to reason with quantifiers --- they only
knew how to reason with the propositional connectives ``and'', ``or'',
etc.\footnote{As far as I know, nobody believes that's how human
thinking actually evolved.} Your job is to describe the rules of
the ``game'' that these early, propositional-logical humans like to
play.
If you think that the scenario we've just described is strange or
silly, then just think of it as a warm-up for the real job: looking in
the mirror, i.e.\ reasoning about how we reason. Obviously, good
human thinking isn't exhausted by propositional logic; at the very
least, it also involves inferences with quantified statements. Hence,
we'll want eventually to have a theory about predicate logic. We'll
turn to that in Chapter \ref{meta2}, and we'll restrict ourselves in
this chapter to propositional logic.
So, to return to our simplified and fictional setup: we want a theory
$T$ that describes propositional logic. To build $T$, we first need
to decide on a vocabulary, i.e.\ on relation symbols, function
symbols, names, etc.\ that can be used to describe propositional
reasoning. For this, we'll make use of set theory. We will assume
that there is a non-empty set $\Sigma$, whose elements we'll call the
\glspl{atomic sentence}. To be clear, the things in $\Sigma$ aren't
{\it our} atomic sentences --- i.e.\ they aren't part of our language.
Instead of using the sentences in $\Sigma$, we are talking {\it about}
them. We also assume that there is another set containing the symbols
$\wedge,\vee ,\to,\neg$, and perhaps some parentheses. Again, don't
think of those symbols as the logical connectives we use; instead,
they are the logical connectives that are used by the people we are
studying. Finally, we assume that there is a basic operation on
symbols called ``concatenation.'' \label{formation}
We then notice that the people we are studying treat some strings of
symbols differently than others. We then hypothesize that there is a
feature of strings, which we might call ``sentencehood,'' and we
introduce a predicate symbol $\mathsf{sent}(\phi )$ to our language
that we will use to describe which things are sentences. (Here we're
using Greek letters, such as $\phi$, as our variables.) Here then is
our theory about the grammar of propositional logic:
\begin{itemize}
\item Any symbol in the set $\Sigma$ is a sentence.
\item For any string $\phi$ of symbols, if $\phi$ is a sentence, then
so is the string $\neg \phi$.
\item If $\phi$ and $\psi$ are sentences, then $\phi\vee\psi$ is a
sentence.
\item If $\phi$ and $\psi$ are sentences, then $\phi\wedge\psi$ is a
sentence.
\item If $\phi$ and $\psi$ are sentences, then $\phi\to\psi$ is a
sentence.
\end{itemize}
All of these axioms seem obviously correct, but they are not yet
sufficient, for they don't entail some other things we know --- for
example, that our test subjects do {\it not} count gibberish strings
as sentences. To capture that additional claim, we draw upon a little
bit of set theory to say that the set of sentences is like the set of
natural numbers, viz.\ all of its elements result from a finite number
of applications of the construction methods to the atomic sentences.
We can give a visual representation of the construction of a sentence
by means of the notion of a \emph{parse tree}. In such a tree, each
node corresponds to a formula. The initial nodes must be atomic
sentences; and new nodes can be constructed from old ones using the
propositional connectives. Thus, for example, given nodes $\phi$ and
$\psi$ we can construct new nodes as follows:
\[ \begin{tikzcd}
\neg\phi\arrow[-]{d} & & & \phi\wedge \psi \arrow[-]{dl} \arrow[-]{dr} \\
\phi & & \phi & & \psi \end{tikzcd} \]
Here's one example of a full parse tree:
\[ \begin{tikzcd}
& & \neg (P\wedge Q)\to R \arrow[-]{dl} \arrow[-]{dr} & & \\
& \neg (P\wedge Q) \arrow[-]{d} & & R & \\
& P\wedge Q \arrow[-]{dl} \arrow[-]{dr} & & & \\
P & & Q & & \end{tikzcd} \] Parse trees are useful in many
ways. First, a parse tree allows us to define the notion of a
\gls{subformula} of a formula $\phi$: namely, any formula that occurs
in the parse tree of $\phi$. Second, a parse tree provides a nice
visual representation of how the truth-value of a sentence $\phi$ is
computed from the truth-value of its atomic subformulas. Indeed, each
node of a parse tree can be considered as a logic circuit: a negation
node is the $\mathsf{not}$ circuit that flips a bit, a conjunction
node is the $\mathsf{and}$ circuit that gives output $1$ only if both
inputs are $1$, etc. Third, a parse tree makes it obvious what the
\gls{main connective} of a sentence is: it's the connective at the
root node of the parse tree. Finally, parse trees give a nice visual
picture of what happens during \gls{substitution}. Consider the
simple case of the substitution $F(P)=Q\to R$ and
$F(Q)=R\wedge \neg R$ applied to the sentence $\phi\equiv P\to Q$.
The parse tree of $F(\phi )$ results from simply pasting the trees for
$F(P)$ and $F(Q)$ to the nodes for $P$ and $Q$ in the parse tree of
$P\to Q$.
\[ \begin{tikzcd}
& P\to Q \arrow[-]{dl} \arrow[-]{dr} & & & F(P\to Q)\equiv (Q\to R)\to
(R\wedge\neg R) \arrow[-]{dl} \arrow[-]{dr} & \\
P & & Q & F(P) & & F(Q)
\end{tikzcd} \]
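Two of these ideas, reading off subformulas from the parse tree and
computing truth-values by treating each node as a circuit, are easy to
implement. Here is a sketch in Python, using the same nested-tuple
encoding of sentences (an illustrative assumption):

```python
def subformulas(phi):
    """All formulas occurring in the parse tree of phi (including phi)."""
    subs = {phi}                          # the node itself
    if isinstance(phi, tuple):
        for child in phi[1:]:             # recurse into the children
            subs |= subformulas(child)
    return subs

def value(phi, v):
    """Truth-value of phi under valuation v, one circuit per tree node."""
    if not isinstance(phi, tuple):
        return v[phi]                     # leaf: look up the atom
    if phi[0] == "not":
        return 1 - value(phi[1], v)       # the 'not' gate flips a bit
    a, b = value(phi[1], v), value(phi[2], v)
    if phi[0] == "and":
        return a & b                      # 'and' gate: 1 only if both 1
    if phi[0] == "or":
        return a | b
    return max(1 - a, b)                  # 'to': false only on 1 -> 0

# The example from the text: not(P and Q) -> R
phi = ("to", ("not", ("and", "P", "Q")), "R")
print(len(subformulas(phi)))                      # 6 subformulas
print(value(phi, {"P": 1, "Q": 1, "R": 0}))       # 1
```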
\section{Induction on the construction of sentences}
In Chapter \ref{theories}, we saw that the set $N$ of natural numbers
is defined so as to license the method of ``proof by induction.''
This method says, roughly, that if you can prove that $0$ is $\phi$,
and if you can prove that whenever $n$ is $\phi$ then so is $n+1$,
then it follows that all natural numbers are $\phi$. We will now see
that this method of proof can be adapted to the set of sentences of
propositional logic --- giving us a powerful tool for proving that
something or other is true for all sentences.
% Let $<$ be a binary relation, and let $T$ be a theory which says that
% $<$ is a linear order that has a least element, and such that for
% every element, there is a unique succeeding element. We can capture
% the latter requirement with the axiom:
% \[ \forall x\exists y\forall z((x<z)\leftrightarrow (y\leq z)) \] Of
% course, we're very familiar with structures of the sort described by
% $T$, in particular the natural numbers $0,1,2,\dots $.
% Before proceeding, let's note that the theory $T$ permits us to define
% some new concepts. In particular, $T$ says that for each element,
% there is a unique element that immediately succeeds it. To be more
% precise, consider the relation:
% \[ \theta (x,y) \: \equiv \: (x<y)\wedge \neg \exists z((x<z)\wedge
% (z<x)) .\] Then $T\vdash \forall x\exists !y\theta (x,y)$, and we can
% introduce a function symbol $s(x)$ to denote the unique $y$ such that
% $\theta (x,y)$.
% Now suppose that $p$ is a predicate symbol, consider the following sequent:
% \[ p(0) , \;\forall x(p(x)\to p(s(x))) \:\vdash\: \forall xp(x) \qquad
% (*) \] Is (*) valid? That's not an easy question to answer. But in
% fact, it is \emph{not} valid, as the following example shows: take the
% natural numbers $0,1,2,\dots $, but then add to the end of the natural
% numbers a copy of all integers, negative and positive. In other
% words, consider the ordered set $M$:
% \[ 0,1,2,\dots ,-1',0',1',\dots , \] where each ellipsis indicates an
% infinite number of succeeding elements. Let's also stipulate that the
% predicate $p$ applies to an element $x$ of $M$ just in case $x$ is a
% member of the first sequence of natural numbers. Then $T$ is true in
% $M$, and the premises of (*) are true in $M$; but the conclusion of
% (*) is \emph{false} in $M$. Therefore, the sequent (*) is not
% logically valid.
% Nonetheless, there are cases where we'll want to assume the validity
% of (*). These are cases where we've constructed a collection $S$, and
% where we want then to prove that everything in $S$ has a certain
% feature $\phi$. The most notable cases of such collections are:
% natural numbers $0,1,2,\dots $, sentences of propositional logic, and
% sequents of propositional logic.
% %% TO DO: first talk about induction on natural numbers ,e.g. $\Sum n$
% %% ...
Let $\Sigma$ be a fixed set of atomic sentences. For simplicity,
we'll first consider a simple case where $\Sigma = \{ P \}$, and where
$\Delta$ is the set of sentences that are built with the $\neg$ and $\vee$
connectives. If you think of the set of sentences on analogy to the
natural numbers $N$, then the sentence $P$ is the $0$, and the
connectives $\neg$ and $\vee$ are like the successor function. In the
case of the set $N$ of natural numbers, each number $n\in N$ is the
result of applying the successor function $s$ to $0$ some finite
number of times. In the case of the set $\Delta$ of sentences, each
sentence $\phi\in \Delta$ results from taking a certain number of copies of
$P$, and applying the connectives $\neg$ and $\vee$ a finite number of
times. This definition of the set $\Delta$ thus licenses the following
extension of the method of proof by induction.
\bigskip \begin{tcolorbox}[enhanced,width=11cm,title=Induction on the
construction of sentences,attach boxed title to top
left={yshift=-2mm,xshift=4mm},boxed title style={sharp corners}]
\[ \begin{array}{c p{6cm} p{2.2cm}}
(1) & Atomic sentences have property $X$. & base case \\
(2) & If $\phi$ and $\psi$ have property
$X$, then $\phi\vee\psi$ has
property $X$. & induction $\vee$ \\
(3) & If $\phi$ has property $X$, then
$\neg\phi$ has property $X$. & induction
$\neg$ \\ \hline
(\text{C}) & Every sentence in $\Delta$ has property
$X$. & conclusion \end{array} \] \end{tcolorbox}
\bigskip What we have here is a family of inference rules, one for
each property $X$ that can be described in our meta-theory of
propositional logic. We haven't been completely precise in telling
which properties of sentences can be articulated. However, as a
general rule, the only relevant properties are the {\it purely
syntactic} properties, e.g.\ ``the main connective of $\phi$ is $\wedge$,'' or ``$\phi$ has three left parentheses.''
We'll now use induction to show that every sentence in $\Delta$ is
provably equivalent to one of the four in the diamond:
\[ \begin{tikzcd} & \arrow[-]{dl} \top \arrow[-]{dr} & \\
P & & \neg P \\
& \arrow[-]{ul} \bot \arrow[-]{ur} & \end{tikzcd} \] Here $\top$
is shorthand for some tautology, e.g.\ $P\vee\neg P$, and $\bot$ is
shorthand for some contradiction, e.g.\ $P\wedge\neg P$, and to say
that $\phi$ and $\psi$ are \emph{provably equivalent} just means that
$\phi\dashv\vdash\psi$.
\begin{prop} Every sentence in $\Delta$ is provably equivalent to one of
the four in the diamond above.
\end{prop}
Before we start the official proof, observe that if $\phi$ and $\psi$
are provably equivalent, then so are $\neg\phi$ and $\neg\psi$.
Indeed, suppose we're given proofs $\phi\vdash\psi$ and
$\psi\vdash\phi$. Then we can obtain proofs of
$\neg\phi\vdash\neg\psi$ and $\neg\psi\vdash\neg\phi$ by using the
(derived) contrapositive rule. Similarly, if $\phi$ and $\phi '$ are
provably equivalent, and $\psi$ and $\psi '$ are provably equivalent,
then so are $\phi\vee\psi$ and $\phi '\vee\psi
'$.
\begin{exercise} Prove that if $\phi\dashv\vdash\phi '$ and
$\psi\dashv\vdash\psi '$ then
$(\phi\vee\psi )\dashv\vdash (\phi '\vee\psi ')$. \end{exercise}
\begin{proof} \fbox{base case} \, Obviously $P$ is equivalent to itself, hence it's
equivalent to one of the four sentences in the diamond.
\bigskip \noindent \fbox{induction $\neg$} \, Suppose that $\phi$ is equivalent to one of
the four in the diamond. If $\phi$ is equivalent to $\top$, then
$\neg\phi$ is equivalent to $\bot$. If $\phi$ is equivalent to
$P$, then $\neg\phi$ is equivalent to $\neg P$. Etc.
\bigskip \noindent \fbox{induction $\vee$} \, Suppose that $\phi$
and $\psi$ are each equivalent to some sentence in the diamond. If
$\phi$ is equivalent to $\bot$, then $\phi\vee\psi$ is equivalent
to $\psi$. If $\phi$ is equivalent to $\top$, then $\phi\vee\psi$
is equivalent to $\top$. Similar conclusions hold if $\psi$ is
equivalent to $\top$ or $\bot$. If $\phi$ and $\psi$ are
equivalent to each other, then $\phi\vee\psi$ is equivalent to
$\phi$. Finally, if $\phi$ is equivalent to $\neg\psi$, then
$\phi\vee\psi$ is equivalent to $\top$.
\end{proof}
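Under soundness and completeness, provable equivalence coincides with
having the same truth table, so the proposition can be spot-checked by
brute force. The Python sketch below (an illustrative encoding)
generates every sentence of $\Delta$ up to nesting depth $3$ and
confirms that exactly four truth tables occur, one per sentence in the
diamond.

```python
# Semantic spot-check: with Sigma = {P}, classify or/not-sentences by
# their truth tables, i.e. by the pair of values at v(P)=0 and v(P)=1.
def val(phi, vP):
    if phi == "P":
        return vP
    if phi[0] == "not":
        return 1 - val(phi[1], vP)
    return val(phi[1], vP) | val(phi[2], vP)       # the 'or' case

def table(phi):
    return (val(phi, 0), val(phi, 1))

sentences = ["P"]
for _ in range(3):            # all of Delta up to nesting depth 3
    sentences += [("not", s) for s in sentences] + \
                 [("or", s, t) for s in sentences for t in sentences]

classes = {table(s) for s in sentences}
print(sorted(classes))   # four classes: bottom, P, not-P, top
```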
We've seen how to use proof by induction to show that something is
true for every sentence in the set $\Delta$ of sentences containing only
the connectives $\vee$ and $\neg$. This method can now be extended to
the set of {\it all} sentences, only we need to add inductive steps
for the other two connectives, $\wedge$ and $\to$.
\[ \begin{array}{c p{5cm} p{2.2cm}}
(4) & If $\phi$ and $\psi$ have property $X$, then $\phi\wedge\psi$ has
property $X$. & induction $\wedge$ \\
(5) & If $\phi$ and $\psi$ have property $X$, then $\phi\to\psi$ has
property $X$. & induction $\to$ \end{array} \]
We will now use induction to prove that every sentence (whose only
atomic sentence is $P$) is equivalent to one of the four in the
diamond. We have already shown that every sentence in $\Delta$ is
equivalent to one of the four in the diamond. Thus, it will suffice
--- by the transitivity of logical equivalence --- to show that every
sentence is equivalent to a sentence in the set $\Delta$.
\begin{prop} Every sentence is provably equivalent to a sentence in
the set $\Delta$. \end{prop}
\begin{proof} Let's say that a sentence $\phi$ has property $X$ just
in case $\phi$ is provably equivalent to a sentence in the set
$\Delta$. We will use induction on the construction of formulas to
prove that every sentence has property $X$. Before we begin, note
that if $\psi$ has property $X$, and \mbox{$\phi\dashv\vdash\psi$},
then $\phi$ has property $X$.
\bigskip \noindent \fbox{base case} \, Since $P\in\Delta$, $P$ has
property $X$.
\bigskip \noindent \fbox{induction $\vee$ and $\neg$} \, If $\phi$
and $\psi$ have property $X$, then $\neg \phi$ and $\phi\vee\psi$
obviously have property $X$.
\bigskip \noindent \fbox{induction $\wedge$} \, Suppose that both
$\phi$ and $\psi$ have property $X$. Then
$\phi\wedge\psi\dashv\vdash\neg (\neg\phi\vee\neg\psi )$, and the latter
obviously has property $X$. Therefore, $\phi\wedge\psi$ has
property $X$.
\bigskip \noindent \fbox{induction $\to$} \, Suppose that both
$\phi$ and $\psi$ have property $X$. Then
$\phi\to\psi\dashv\vdash\neg\phi\vee\psi$, and the latter obviously has
property $X$. Therefore, $\phi\to\psi$ has property $X$.
\bigskip \noindent This completes the inductive steps, and so it
follows that every sentence has property $X$.
\end{proof}
\begin{exercise} Let $\Theta$ be the set of sentences whose only
atomic sentence is $P$, and whose only connectives are $\neg$ and
$\wedge$. Show that every sentence is provably equivalent to a
sentence in $\Theta$. \end{exercise}
\begin{exercise} Let $\Gamma$ be the set of formulas defined as follows:
\begin{itemize}
\item $P\in \Gamma$,
\item If $\phi\in\Gamma$ and $\psi\in\Gamma$ then
$\phi\vee\psi\in\Gamma$;
\item Every element of $\Gamma$ arises from a finite number of the
previous steps.
\end{itemize} Use mathematical induction to show that for all
$\phi\in\Gamma$, $\phi\vdash P$.
\end{exercise}
\section{Truth Functions}
In Chapter \ref{truth} we introduced truth-tables as a tool for
deciding whether arguments are valid or not. It's time now to think
more theoretically about what truth-tables are, and about what they
can do.
Each one of our connectives $\neg ,\wedge ,\vee$ and $\to$ has an
associated truth-table. Therefore, these connectives are
\emph{\gls{truth-functional}}, i.e.\ the truth-value of an output
sentence, say $\neg \phi$, is completely determined by the truth-value
of the input sentence $\phi$. Now, you might wonder: how in the world
could a connective {\it not} be truth-functional? Well, consider for
example, the connective, ``Donald Trump said that \dots ''. This
phrase is a bona fide propositional connective because for any
declarative sentence $\phi$, you can set it in the blank, and the
output is a new sentence: ``Donald Trump said that $\phi$.'' However,
even the most ardent defender of Trump wouldn't want to say that this
connective is truth-functional, for there is surely at least one false
sentence $\phi$ such that ``Donald Trump said that $\phi$'' is true,
and at least one false sentence $\psi$ such that ``Donald Trump said
that $\psi$'' is false. Therefore, the connective cannot determine
the output truth-value simply on the basis of the input truth-value.
The ``Donald Trump said that \dots '' connective has not been studied
carefully by philosophers. However, there are other
non-truth-functional connectives that have been. One of philosophers'
favorites is the connective ``It is necessarily true that \dots ''.
As long as there are some truths that are not necessarily true, then
this connective is not truth-functional. And since philosophers have
long been interested in necessary truths, they have taken particular
interest in non-truth-functional connectives. They study these
connectives in a subject called \emph{modal logic}.
Our focus here, however, is on truth-functional connectives. We can
now raise the question: are there other truth-functional connectives
besides $\neg ,\vee ,\wedge ,\to$? Well, immediately we know the
answer is yes, for we also have the connective $\lra$, which doesn't
have the same truth table as any of the binary connectives
$\vee ,\wedge ,\to$. However, you
might be quick to point out that the truth table for $\lra$ can be
simulated by using both the $\wedge$ and $\to$ connectives. Let's
distinguish then between connectives that can be expressed in terms of
$\neg ,\vee ,\wedge ,\to$, and those that cannot be so expressed. We
can then rephrase the question as: are there any truth-functional
connectives that cannot be expressed in terms of the ones we already
have?
It might sound at first like that question is impossibly difficult to
answer. But let's start by thinking about how many possible
truth-functions there could be. (Here a \emph{truth-function} is
simply a function that takes truth-values as inputs, and returns
truth-values as outputs. Since our truth-values are $0$ and $1$, a
truth-function is a function from the set $\{ 0,1\}$ to itself.) For
this, we need to do some basic calculation. Starting with the case of
unary truth-functions (i.e.\ those that take one input), there are
precisely $4$ functions from $\{ 0,1\}$ to itself: the identity
function, the function that exchanges $0$ and $1$, the function that
maps both elements to $0$, and the function that maps both elements to
$1$. And clearly we can express those four functions in terms of
combinations of connectives, e.g.\ the constant $0$ function is
expressed by $P\wedge\neg P$.
Now, for binary truth-functions (i.e.\ those with two inputs), we
already have many more possibilities. Each function from
$\{ 0,1\}\times \{ 0,1\}$ to $\{ 0,1\}$ corresponds to a division of
the former set into two parts: those elements that get assigned $0$,
and those elements that get assigned $1$. Since the former subset is
the complement of the latter, each such function is uniquely
determined by the subset of elements to which it assigns $1$. Hence,
there is a one-to-one correspondence between binary truth-functions
and subsets of $\{ 0,1\}\times \{ 0,1\}$, i.e.\ elements of the
powerset of $\{ 0,1\}\times\{ 0,1\}$.
If a set $X$ has $|X|$ elements, then $X$ has $2^{|X|}$ subsets. In
the case at hand, $\{ 0,1\}\times \{ 0,1\}$ has $4$ elements, hence
$2^4$ subsets; consequently, there are $2^4=16$ binary
truth-functions. Thus, there are $13$ more truth-functions besides
those represented by $\wedge ,\vee$ and $\to$. It might seem, then,
we're very far indeed from being able to express all binary
truth-functions. But in fact, the opposite is true. In Figure
\ref{hasse}, we display $16$ sentences that have distinct truth tables.
%% TO DO: fix this ugly table
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[source/.style={draw,thick,rounded corners,inner sep=2pt}]
\node[source] (min) at (0,-1) {$\bot$};
\node[source] (A1) at (-3,1) {$P\wedge Q$};
\node[source] (A2) at (-1,1) {$P\wedge \neg Q$};
\node[source] (A3) at (1,1) {$\neg P\wedge Q$};
\node[source] (A4) at (3,1) {$\neg P\wedge \neg Q$};
\node[source] (B1) at (-5,3) {$P$};
\node[source] (B2) at (-3,3) {$Q$};
\node[source] (B3) at (-1,3) {$P\leftrightarrow Q$};
\node[source] (B4) at (1,3) {$P\leftrightarrow \neg Q$};
\node[source] (B7) at (3,3) {$\neg Q$};
\node[source] (B8) at (5,3) {$\neg P$};
\node[source] (C1) at (-3,5) {$P\vee Q$};
\node[source] (C2) at (-1,5) {$P\vee \neg Q$};
\node[source] (C3) at (1,5) {$\neg P\vee Q$};
\node[source] (C4) at (3,5) {$\neg P\vee \neg Q$};
\node[source] (max) at (0,7) {$\top$};
\draw (min) -- (A1);
\draw (min) -- (A2);
\draw (min) -- (A3);
\draw (min) -- (A4);
\draw (A1) -- (B1);
\draw (A1) -- (B3);
\draw (A1) -- (B2);
\draw (A2) -- (B1);
\draw (A2) -- (B4);
\draw (A2) -- (B7);
\draw (A3) -- (B2);
\draw (A3) -- (B4);
\draw (A3) -- (B8);
\draw (A4) -- (B3);
\draw (A4) -- (B8);
\draw (A4) -- (B7);
\draw (B1) -- (C1);
\draw (B1) -- (C2);
\draw (B2) -- (C1);
\draw (B2) -- (C3);
\draw (B3) -- (C2);
\draw (B3) -- (C3);
\draw (B4) -- (C1);
\draw (B4) -- (C4);
\draw (B7) -- (C2);
\draw (B7) -- (C4);
\draw (B8) -- (C3);
\draw (B8) -- (C4);
\draw (C1) -- (max);
\draw (C2) -- (max);
\draw (C3) -- (max);
\draw (C4) -- (max);
\end{tikzpicture}
\caption{Horizontal rows have the same number of $1$'s. A connecting line indicates logical implication going upwards.} \label{hasse}
\end{figure}
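The counting argument can be checked mechanically. This Python sketch
enumerates all $2^4=16$ output columns and computes the columns of the
three familiar binary connectives; the ordering of the input pairs as
$(0,0),(0,1),(1,0),(1,1)$ is an illustrative convention.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))     # (0,0), (0,1), (1,0), (1,1)

# A binary truth-function is fixed by its output column, i.e. by the
# subset of input pairs it sends to 1; there are 2**4 such columns.
columns = list(product([0, 1], repeat=4))
print(len(columns))                          # 16

def column(f):
    return tuple(f(p, q) for p, q in inputs)

# Columns of three familiar binary connectives (input order as above).
print(column(lambda p, q: p & q))            # and: (0, 0, 0, 1)
print(column(lambda p, q: p | q))            # or:  (0, 1, 1, 1)
print(column(lambda p, q: max(1 - p, q)))    # to:  (1, 1, 0, 1)
```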
Since we have equivalences:
\[ P\to Q \equiv \neg P\vee Q \qquad \text{and} \qquad P\wedge Q
\equiv \neg (\neg P\vee \neg Q) , \] any sentence is equivalent to a
sentence whose only connectives are $\vee$ and $\neg$, and each of the $16$ truth
functions can be expressed in terms of these connectives. Similarly,
each of these $16$ truth functions can be expressed just with $\neg$
and $\wedge$. If every truth-function
can be expressed in terms of a set $\Gamma$ of connectives, then we
say that $\Gamma$ is \emph{\gls{expressively complete}} or
\emph{expressively adequate}. Thus, we just sketched proofs that
$\{ \neg ,\vee\}$ and $\{ \neg ,\wedge\}$ are expressively complete.
Amazingly, there is a single truth-function that is itself
expressively complete. The corresponding connective is called
``nand'', and is usually symbolized by $\uparrow$. The truth-table
for $P \uparrow Q$ is, by definition, the same as that for
$\neg (P\wedge Q)$.
\[ \begin{array}{ c@{\, }c | c@{\, }c@{\, }c }
P & Q & P & \uparrow & Q \\ \hline
1 & 1 & 1 & \mathbf{0} & 1 \\
1 & 0 & 1 & \mathbf{1} & 0 \\
0 & 1 & 0 & \mathbf{1} & 1 \\
0 & 0 & 0 & \mathbf{1} & 0 \\
\end{array} \]
To show that the set $\{ \uparrow \}$ is expressively complete, it will suffice
to show that it can reproduce the truth tables for $\neg$ and $\vee$.
After some trial and error, we find that the following definitions work:
\[ \begin{array}{c c c c } \begin{array}{ c | c@{\, }c@{\, }c}
P & P & \uparrow & P \\
\hline
1 & 1 & \mathbf{0} & 1 \\
0 & 0 & \mathbf{1} & 0 \end{array}
& & & \begin{array}{ c@{\, }c | c@{ }c@{\, }c@{\, }c@{ }c@{\, }c@{\, }c@{ }c@{\, }c@{\, }c@{ }c@{ }c@{ }c}
P & Q & ( & P & \uparrow & P & ) & \uparrow & ( & Q & \uparrow & Q & ) & \\
\hline
1 & 1 & & 1 & 0 & 1 & & \mathbf{1} & & 1 & 0 & 1 & & \\
1 & 0 & & 1 & 0 & 1 & & \mathbf{1} & & 0 & 1 & 0 & & \\
0 & 1 & & 0 & 1 & 0 & & \mathbf{1} & & 1 & 0 & 1 & & \\
0 & 0 & & 0 & 1 & 0 & & \mathbf{0} & & 0 & 1 & 0 & & \\
\end{array} \end{array} \]
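These table calculations can be replayed in a few lines. A Python
sketch (illustrative, writing $1$ for true and $0$ for false)
confirming that $\uparrow$ reproduces the tables of $\neg$ and $\vee$:

```python
def nand(p, q):
    """Truth-function of the 'nand' connective: the negation of p and q."""
    return 1 - (p & q)

# not P   has the table of   P nand P
# P or Q  has the table of   (P nand P) nand (Q nand Q)
for p in (0, 1):
    assert nand(p, p) == 1 - p
    for q in (0, 1):
        assert nand(nand(p, p), nand(q, q)) == (p | q)
print("nand reproduces the tables of 'not' and 'or'")
```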
\begin{exercise} Give a formula using only $P,Q$ and $\uparrow$ that
has the same truth-table as $P\wedge Q$. \end{exercise} If every
truth-function can be expressed by the $\uparrow$ connective alone,
then why, you might wonder, don't we use it instead of the redundant
collection $\{ \neg ,\vee ,\wedge ,\to \}$ of four connectives? The
answer, in short, is that we face a trade-off between simplicity and
naturality, where the latter
is a function of what we are familiar with. For most of us, it's
fairly natural to reason in terms of ``and'', and less natural to
reason in terms of ``nand''. Why that is, we don't pretend to know.
Nonetheless, you now know that if somebody has the concept of
``nand'', then they can express any other truth-functional concept.
It can be more difficult to show that a set of connectives is {\it
not} expressively complete. For example, suppose that we want to
show that the set $\{ \vee \}$ is not, by itself, truth-functionally
complete. The way we would approach this is to start trying to
express some of the truth-tables, to get a feeling of what we {\it
cannot} do. With the connective $\vee$, we can write $P\vee P$,
which is equivalent to $P$ again. We can also write $P\vee Q$. But
it seems that we get stuck at that point. If we write longer
disjunctions, say $P\vee (P\vee Q)$, we quickly realize that we
haven't expressed anything new. That is, we get stuck with
truth-functions that are true whenever $P$ and $Q$ are true. That
realization then gives us an idea: perhaps we can prove that every
truth-function that can be expressed with just $\vee$ has this
feature, i.e.\ that it's true whenever $P$ and $Q$ are true. That
idea provides the intuition behind the following proof.
To make this proof a bit clearer, we need a slight change of
terminology. Instead of talking about a row of a truth-table, let's
talk about a \emph{valuation}. To be precise, a valuation $v$
assigns each atomic sentence (in this case, $P$ and $Q$) either $0$
or $1$. If we follow the truth-table recipe, then a valuation
naturally extends to assign every sentence either $0$ or $1$. For
example, if $v(\phi )=1$ and $v(\psi )=1$ then $v(\phi\vee\psi
)=1$. The claim above, then, is that if $v$ is a valuation such
that $v(P)=1$ and $v(Q)=1$, then $v(\phi )=1$ for any sentence whose
only connective is $\vee$.
\begin{prop} The set $\{ \vee\}$ is not truth-functionally
complete. \end{prop}
\begin{proof} Let $\Gamma$ be the set of sentences built only with the
connective $\vee$. Let $v$ be the valuation that assigns $1$ to
every atomic sentence. We show that for every sentence
$\phi\in\Gamma$, $v(\phi )=1$. Our argument proceeds by induction
on the construction of $\Gamma$. By definition, for $\phi$ an
atomic sentence in $\Gamma$, we have $v(\phi )=1$. Furthermore, if
$v(\phi )=1$ and $v(\psi )=1$ then $v(\phi\vee\psi )=1$.
Therefore, $v(\phi )=1$ for all $\phi\in \Gamma$. It follows that
there is no sentence in $\Gamma$ that is logically equivalent to
$P\wedge\neg P$, and $\{ \vee \}$ is not truth-functionally
complete.
\end{proof}
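The inductive argument in the proof can be mirrored computationally. A
Python sketch (illustrative encoding) that builds $\vee$-only
sentences and checks that the valuation assigning $1$ to every atom
makes all of them true:

```python
# The valuation that makes every atom true, extended recursively
# through disjunctions, exactly as in the inductive proof.
def all_true(phi):
    if isinstance(phi, str):
        return 1                                  # base case: v(atom) = 1
    return all_true(phi[1]) | all_true(phi[2])    # the 'or' case

sentences = ["P", "Q"]
for _ in range(3):                    # disjunctions of disjunctions
    sentences += [("or", s, t) for s in sentences for t in sentences]

assert all(all_true(s) == 1 for s in sentences)
print("every or-only sentence is true under the all-true valuation")
```

In particular, no sentence in the list is equivalent to the
contradiction $P\wedge\neg P$, which is false under that valuation.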
\begin{exercise}
Show that the set $\{ \wedge \}$ is not truth-functionally
complete. \end{exercise}
\begin{exercise} Consider sentences built out of the atomic sentences $P$ and
$Q$. In this case, there are four valuations $v_1,v_2,v_3,v_4$.
Let's say that a sentence $\phi$ is {\it even} just in case
$\phi$ is true for either $0,2$, or $4$ valuations. Show that if
$\phi$ is even then $\neg \phi$ is even. Show that if $\phi$ and
$\psi$ are even, then $\phi\lra\psi$ is even. \end{exercise}
\begin{exercise} Is the set $\{ \neg ,\lra \}$ truth-functionally
complete? \end{exercise}
%% TO DO: draw sentences as subsets of the space of valuations
%% TO DO: prove $\vdash (P\lra Q)\vee (P\lra R)\vee (P\lra R)
%% TO DO: show that $P\lra (P\lra Q)\equiv Q$
%% TO DO -- define substitution instances
\section{A theory of what can be proven}
In the previous sections we developed a theory about the grammar and
semantics of propositional logic. In this section we develop a theory
about proofs, i.e.\ about what can and cannot be proven with the
inference rules for propositional logic. To this end, we introduce
into {\it our} language --- i.e.\ the language we are using to
describe propositional logic --- a relation symbol $\vdash$, which we
write in infix notation, taking an argument on the left and an
argument on the right. (Technically, the symbol $\vdash$ ambiguously
represents several different relation symbols, one for each finite
number of sentences that can occur on the left. But we'll brush this
complication under the rug.)
The inference rules for propositional logic provide an \emph{inductive
definition} of the set of valid sequents, i.e.\ of the extension of
the relation $\vdash$. \index{definition!inductive} The base case
here is the rule of assumptions: for any formula $\phi$, the sequent
$\phi\vdash\phi$ is valid. Each of the other inference rules is a
recipe for constructing a new valid sequent from one, two, or three
old ones. Since the extension of $\vdash$ is defined inductively, it
follows that we can prove things about all sequents by means of an
induction schema. Here the induction schema looks like this:
\bigskip \begin{tcolorbox}[enhanced,width=12cm,title=Induction on the
construction of sequents,attach boxed title to top
left={yshift=-2mm,xshift=4mm},boxed title style={sharp corners}]
$ \begin{array}{c p{7cm} p{2.3cm}}
(1) & $\phi\vdash\phi$ has property $X$. & base \\
(2) & If $\Gamma\vdash \phi\to\psi$ and
$\Gamma\vdash\phi$ have property
$X$, \newline then $\Gamma\vdash\psi$ has property $X$. &
induction MP \\
\vdots \\ \hline
(\text{C}) & All sequents have property
$X$. & conclusion \end{array} $ \end{tcolorbox} As
you can see, a proof by induction on the construction of
sequents will involve {\it many} inductive steps --- one
for each inference rule. But what kinds of things might
we want to show about the collection of all sequents?
There are two things about the collection of sequents
that we've already assumed and used. First, we assumed
that sequents couldn't be messed up by find-and-replace
operations. That is, for any sequent
$\phi\vdash\psi$, if you perform a uniform substitution
of sentences for propositional constants, then you get
another valid sequent $F(\phi )\vdash F(\psi )$. Second, we assumed that truth-tables can detect when a sequent cannot be proven. In other words, we assumed that for any valid sequent $\phi\vdash\psi$, the truth table for $\phi$ and $\psi$ will have no row in which $\phi$ is true and $\psi$ is false. It's time now to prove that these two assumptions are true.
In order to set up these proofs, it will be helpful to imagine two
different languages, with different atomic sentences. Let $\Sigma$ be
one list of atomic sentences, and let $\Sigma '$ be another list of
atomic sentences. We call $\Sigma$ the \emph{\gls{signature}}
\index{signature} of the one language, whereas $\Sigma '$ is the
signature of the second language. (If it helps you to remember, you
could think of $\Sigma '$ as having atomic sentences that have a prime
symbol, say $P',Q',R'$.) We then let both languages build their
sentences from their respective atomic sentences using the logical
connectives $\neg ,\wedge ,\vee ,\to$.
Now let's imagine what might count as a \emph{\gls{translation}}
\index{translation} between languages $\Sigma$ and $\Sigma '$. Keep
in mind that a good translation between languages need not be
word-for-word. For example, it's doubtful that there is a single
English word that could translate the German word {\it Zeitgeist}, or
a single English word that could translate the Danish word {\it
hygge}. (I've heard that there are many more interesting examples
from languages such as Hindi, Urdu, and Mandarin.) So, we shouldn't
require that a translation from $\Sigma$ to $\Sigma '$ has to match an
atomic sentence in $\Sigma$ with an atomic sentence in $\Sigma '$.
Instead, we will allow that an atomic sentence in $\Sigma$ be
reconstrued as {\it any} sentence of $\Sigma '$.
\begin{defn} Let $\Sigma$ and $\Sigma '$ be propositional signatures.
A \emph{\gls{reconstrual}} $F$ from $\Sigma$ to $\Sigma '$ is a
function that takes atomic sentences of $\Sigma$ to sentences of
$\Sigma '$. \end{defn}
A reconstrual $F:\Sigma\to\Sigma '$ extends naturally to a function
from {\it all} sentences of $\Sigma$ to sentences of $\Sigma '$. We define:
\[ \begin{array}{r c l}
F(\neg \phi ) & = & \neg F(\phi ) , \\
F(\phi\wedge\psi ) & = & F(\phi )\wedge F(\psi ) , \\
F(\phi\vee\psi ) & = & F(\phi )\vee F(\psi ) ,\\
F(\phi\to\psi ) & = & F(\phi )\to F(\psi ) .\end{array} \] To get
a feel for how this extension works, let's look at a specific example. Suppose that $\Sigma = \{ P,Q\}$, $\Sigma '=\{ R,S\}$, and that we define the reconstrual $F$ by $F(P)=R\wedge S$ and $F(Q)=\neg S$. Then
\[ F(\neg P\vee Q ) \: = \: \neg F(P)\vee F(Q) =\neg (R\wedge S)\vee \neg S
. \]
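For readers who like to compute, the recursive extension of a reconstrual can be sketched in a few lines of Python (the tuple encoding of sentences and the name \texttt{extend} are our own illustrative conventions):

```python
def extend(s, recon):
    """Extend a reconstrual (a dict: atom -> sentence) to every sentence,
    by pushing it through the connectives as in the displayed equations."""
    if isinstance(s, str):            # atomic sentence: apply the reconstrual
        return recon[s]
    op, *args = s                     # connective: recurse on the arguments
    return (op, *(extend(a, recon) for a in args))

# F(P) = R ∧ S and F(Q) = ¬S, as in the example above.
recon = {'P': ('and', 'R', 'S'), 'Q': ('not', 'S')}
phi = ('or', ('not', 'P'), 'Q')       # ¬P ∨ Q
print(extend(phi, recon))
# → ('or', ('not', ('and', 'R', 'S')), ('not', 'S')), i.e. ¬(R∧S) ∨ ¬S
```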
While a translation $F$ can run between distinct languages, it can
also run from a language back to itself. This kind of thing happens,
in fact, quite frequently in the sciences. For example, a few decades
ago, some clever economists figured out that some of the differential
equations of physics could be applied to financial markets. Their
proposal amounted to a translation from the language of physics into
the language of economics --- and since physics and economics are both part of our total
language, a translation from our language back into itself.
Within propositional logic, we can use this notion of translating a
language into itself to make sense of the idea of a substitution
instance of a sentence. In short, a substitution instance of a
sentence $\phi$ is any sentence to which $\phi$ could be translated.
(It can be helpful here to continue thinking of translations to other
languages. In that case, a sentence in our language can have
substitution instances in many other different languages.)
\begin{defn} A \gls{substitution instance} of $\phi$ is any sentence
of the form $F(\phi )$, where $F:\Sigma\to\Sigma '$ is a
reconstrual. \end{defn} The key idea here is that a substitution
instance of a sentence has the {\it same form} as that sentence. One
confirmation that we've got the notion right is that any substitution
instance of a tautology is still a tautology. (Here we are using
``tautology'' in its semantic sense: a sentence that is true relative
to every valuation.)
\begin{prop} If $\phi$ is a tautology, then any substitution instance
of $\phi$ is also a tautology. \end{prop}
\begin{proof} Suppose that $\phi$ is a tautology and that $F(\phi )$
is a substitution instance of $\phi$. Let $v$ be an arbitrary
valuation. We need to show that $v(F(\phi ))=1$. Consider the
valuation $w$ defined by $w(P)=v(F(P))$ for each atomic sentence
$P$. Since $\phi$ is a tautology, $w(\phi )=1$. In addition, since
$w$ and $v\circ F$ agree on atomic sentences, and both commute with
all the sentence connectives, it follows that $w=v\circ F$.
Therefore, $v(F(\phi ))=(v\circ F)(\phi )=w(\phi )=1$. Since $v$
was an arbitrary valuation, it follows that $F(\phi )$ is a
tautology. \end{proof}
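The proposition can be spot-checked by brute force for any particular sentence. Here is a Python sketch (the tuple encoding and helper names are our own conventions): \texttt{tautology} checks every valuation of a sentence's atoms, and \texttt{extend} applies a reconstrual.

```python
from itertools import product

def ev(s, v):
    """Evaluate a sentence (string atom or tuple) under valuation v."""
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

def atoms(s):
    return {s} if isinstance(s, str) else set().union(*(atoms(a) for a in s[1:]))

def tautology(s):
    """True iff s is assigned 1 by every valuation of its atoms."""
    at = sorted(atoms(s))
    return all(ev(s, dict(zip(at, row))) == 1
               for row in product([0, 1], repeat=len(at)))

def extend(s, recon):
    """Extend a reconstrual from atoms to all sentences."""
    if isinstance(s, str): return recon[s]
    return (s[0], *(extend(a, recon) for a in s[1:]))

phi = ('or', 'P', ('not', 'P'))                      # P ∨ ¬P, a tautology
inst = extend(phi, {'P': ('to', 'Q', ('and', 'Q', 'R'))})
assert tautology(phi) and tautology(inst)
```

The substitution instance here has the form $\alpha\vee\neg\alpha$, so it is a tautology for exactly the reason the proof predicts.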
\begin{prop} If $\phi$ is a contingent sentence, then $\phi$ has a
tautologous substitution instance. \end{prop}
\begin{proof} Suppose that $\phi$ is contingent, and let
$P_0,\dots ,P_n$ be a list of the atomic sentences that occur in
$\phi$. Since $\phi$ is contingent, there is a valuation $v$ such
that $v(\phi )=1$. Let $F$ be an arbitrary contradiction, and let
$T$ be an arbitrary tautology. For $i=0,\dots ,n$, define
$f(P_i)=T$ if $v(P_i)=1$, and $f(P_i)=F$ if $v(P_i)=0$. We claim
then that $f(\phi )$ is a tautology. Let $w$ be an arbitrary
valuation. For each atomic sentence $P_i$, if $v(P_i)=1$ then
$v(P_i)=w(T)=w(f(P_i))$; and if $v(P_i)=0$ then
$v(P_i)=w(F)=w(f(P_i))$. Thus, $v$ and $w\circ f$ agree on all
atomic sentences. Since $v$ and $w\circ f$ are truth-functional,
they agree on all sentences, hence on $\phi$. Therefore
$w(f(\phi ))=v(\phi )=1$. Since $w$ was an arbitrary valuation,
$f(\phi )$ is a tautology. \end{proof}
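The recipe in this proof is entirely mechanical, and can be run in Python (tuple encoding and helper names ours), here on the contingent sentence $P\wedge\neg Q$:

```python
from itertools import product

def ev(s, v):
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

def atoms(s):
    return {s} if isinstance(s, str) else set().union(*(atoms(a) for a in s[1:]))

def tautology(s):
    at = sorted(atoms(s))
    return all(ev(s, dict(zip(at, row))) == 1
               for row in product([0, 1], repeat=len(at)))

def extend(s, recon):
    if isinstance(s, str): return recon[s]
    return (s[0], *(extend(a, recon) for a in s[1:]))

T = ('or', 'P', ('not', 'P'))        # a tautology
C = ('and', 'P', ('not', 'P'))       # a contradiction
phi = ('and', 'P', ('not', 'Q'))     # contingent: true when v(P)=1, v(Q)=0
v = {'P': 1, 'Q': 0}
# Send each atom to T or C according to its value under v, as in the proof.
recon = {a: (T if v[a] == 1 else C) for a in atoms(phi)}
assert tautology(extend(phi, recon))
```

The instance is $(P\vee\neg P)\wedge\neg (P\wedge\neg P)$, which is indeed a tautology.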
\begin{exercise} Give a tautologous substitution instance
of the sentence $P\to (Q\wedge R)$. \end{exercise}
\begin{exercise} Follow the outlines of the previous proof to show
that if $\phi$ is a contingent sentence, then $\phi$ has an
inconsistent substitution instance. \end{exercise}
We're now ready for the main show. The substitution meta-rule says
that if you take a valid proof and perform uniform substitution, then
the result is still a valid proof. We now prove that fact.
\begin{subthm} Let $F:\Sigma\to\Sigma '$ be a reconstrual. If
$\phi _1,\dots ,\phi _n\vdash \psi$ then
$F(\phi _1),\dots ,F(\phi _n)\vdash F(\psi )$. \end{subthm} \index{substitution}
\begin{proof} We prove the result by induction on the construction of
sequents. (We will prove the inductive steps for $\wedge$I, CP, and
$\vee$E, and leave the other steps to the reader.)
\bigskip \noindent \fbox{base case} \, The rule of assumptions gives
not only $\phi\vdash\phi$, but also $F(\phi )\vdash F(\phi )$.
\bigskip \noindent \fbox{induction $\wedge$I} \, Suppose that
$\phi _1,\dots ,\phi _n,\psi _1,\dots ,\psi _n\:\vdash\: \phi\wedge
\psi$ results from an application of $\wedge$I to
$\phi _1,\dots ,\phi _n\vdash\phi$ and
$\psi _1,\dots ,\psi _n\vdash \psi$, and suppose that the result
holds for these latter two sequents. That is,
$F(\phi _1),\dots ,F(\phi _n)\vdash F(\phi )$ and
$F(\psi _1),\dots ,F(\psi _n)\vdash F(\psi )$. By conjunction
introduction, we have
\[ F(\phi _1),\dots ,F(\phi _n),F(\psi _1),\dots
,F(\psi _n)\:\vdash\: F(\phi )\wedge F(\psi ) .\] Since
$F(\phi )\wedge F(\psi )=F(\phi\wedge\psi)$, it follows that
\[ F(\phi _1),\dots ,F(\phi _n),F(\psi _1),\dots ,F(\psi
_n)\:\vdash\: F(\phi \wedge \psi ) .\]
\bigskip \noindent \fbox{induction CP} \, Suppose that
$\phi _1,\dots ,\phi _n\vdash \psi\to \chi$ is derived by CP from
$\phi _1,\dots ,\phi _n,\psi\vdash \chi$. Now assume that the
result holds for the latter sequent, i.e.
$F(\phi _1),\dots ,F(\phi _n),F(\psi )\vdash F(\chi )$. Then CP
yields
\[ F(\phi _1),\dots ,F(\phi _n)\:\vdash\: F(\psi )\to F(\chi ) .\]
Since $F(\psi )\to F(\chi )=F(\psi\to\chi )$, it follows that
\[ F(\phi _1),\dots ,F(\phi _n)\:\vdash\: F(\psi\to\chi ) .\]
\bigskip \noindent \fbox{induction RAA} \, Suppose that
$\phi _1,\dots ,\phi _n\vdash\neg \psi$ is derived by RAA from
$\phi _1,\dots ,\phi _n,\psi\vdash\chi\wedge\neg\chi$, and assume
that the result holds for the latter sequent, i.e.
$F(\phi _1),\dots ,F(\phi _n),F(\psi )\vdash F(\chi\wedge\neg\chi
)$. By the properties of $F$,
$F(\chi\wedge\neg\chi )=F(\chi )\wedge\neg F(\chi )$, which is also
a contradiction. Thus, RAA yields
$F(\phi _1),\dots ,F(\phi _n)\vdash \neg F(\psi )$. Since
$\neg F(\psi )=F(\neg \psi)$, it follows that
$F(\phi _1),\dots ,F(\phi _n)\:\vdash\: F(\neg \psi )$, which is
what we wanted to prove.
\bigskip \noindent \fbox{induction $\vee$E} \, Suppose that
$\phi ,\psi _1,\psi _2\vdash \chi$ results from an application of
$\vee$E to the following three sequents:
\[ \phi\vdash \theta _1\vee\theta _2 \qquad \psi _1,\theta _1\vdash
\chi \qquad \psi _2,\theta _2\vdash \chi \] and assume that the
result holds for these three sequents, that is,
\[ F(\phi )\vdash F(\theta _1\vee\theta _2) \qquad F(\psi _1
),F(\theta _1 )\vdash F(\chi ) \qquad F(\psi _2),F(\theta
_2)\vdash F(\chi ) .\] Since
$F(\theta _1\vee\theta _2)=F(\theta _1)\vee F(\theta _2)$, an
application of $\vee$E yields
\[ F(\phi ),F(\psi _1),F(\psi _2)\:\vdash\: F(\chi ) .\]
\bigskip \noindent As mentioned before, we leave the remaining steps to
the interested reader. With those steps completed, it follows that
for any sentences $\phi _1,\dots ,\phi _n,\psi$, if
$\phi _1,\dots ,\phi _n\vdash\psi$ then
$F(\phi _1),\dots ,F(\phi _n)\vdash F(\psi )$.
\end{proof}
The substitution theorem immediately yields the following corollary.
\begin{prop} If $\phi$ is provable, then any substitution instance of
$\phi$ is also provable. \end{prop}
We're now ready to prove the \gls{sound} of the inference rules for
propositional logic. The proof is essentially another version of the
proof of the substitution theorem, except that we map sentences to the
numbers $0$ and $1$ instead of to other sentences.
\begin{sothm} For any valuation $v$, if
$\phi _1,\dots ,\phi _n\vdash\psi$ then
$\min \{ v(\phi _1),\dots ,v(\phi _n) \}\leq v(\psi )$. \end{sothm}
In the statement of the theorem, ``$\min$'' treats the premises
$\phi _1,\dots ,\phi _n$ as forming a conjunction: the minimum of
$v(\phi _1),\dots ,v(\phi _n)$ is the same as
$v(\phi _1\wedge\cdots\wedge \phi _n)$.
\begin{proof} The proof is by induction on the construction of
sequents. We will just show a couple of cases, and leave the others
to the reader.
\bigskip\noindent \fbox{induction MP} \, Suppose that $\phi _1,\phi _2\vdash\chi$ results
from MP applied to $\phi _1\vdash \psi\to\chi$ and $\phi
_2\vdash\psi$, and assume that the result holds for these two
sequents, i.e.\ $v(\phi _1)\leq v(\psi\to\chi )$
and $v(\phi _2)\leq v(\psi )$. If $\min \{ v(\phi _1),v(\phi _2)
\}=0$, then we're done. If $\min \{ v(\phi _1),v(\phi _2) \}=1$,
then $v(\psi\to \chi )=1$ and $v(\psi )=1$, from which it follows
that $v(\chi )=1$. In either case, $\min \{ v(\phi _1),v(\phi
_2)\}\leq v(\chi )$.
\bigskip\noindent \fbox{induction CP} \, Suppose that
$\phi \vdash \psi\to \chi$ is derived by CP from
$\phi ,\psi\vdash \chi$, and assume that the result holds for the
latter sequent, i.e.
$\min \{ v(\phi ),v(\psi ) \} \:\leq\: v(\chi )$. Either
$v(\psi )=0$ or $v(\psi )=1$. If $v(\psi )=0$ then
$v(\psi\to\chi )=1$ and we're done. If $v(\psi )=1$ then the above
gives $v(\chi )=1$, and hence $v(\psi\to\chi )=1$. In either case,
$v(\phi )\leq v(\psi\to\chi )$.
\end{proof}
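The $\min$ inequality in the soundness theorem is itself easy to test mechanically for any candidate sequent: run through all valuations of the atoms involved. A Python sketch (tuple encoding ours):

```python
from itertools import product

def ev(s, v):
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

def atoms(s):
    return {s} if isinstance(s, str) else set().union(*(atoms(a) for a in s[1:]))

def min_inequality(premises, concl):
    """Does min{v(premise)} <= v(concl) hold for every valuation v?"""
    at = sorted(set().union(*(atoms(s) for s in premises + [concl])))
    for row in product([0, 1], repeat=len(at)):
        v = dict(zip(at, row))
        if min(ev(p, v) for p in premises) > ev(concl, v):
            return False
    return True

assert min_inequality([('to', 'P', 'Q'), 'P'], 'Q')    # MP-shaped sequent
assert min_inequality([('and', 'P', 'Q')], 'P')        # ∧E-shaped sequent
assert not min_inequality(['P'], ('and', 'P', 'Q'))    # not truth-preserving
```

By soundness, the last check shows that the sequent $P\vdash P\wedge Q$ cannot be proven.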
This fulfills the promise we made in Chapter \ref{truth} that the
truth table method provides a reliable test for when a sequent cannot
be proven. In particular, if there is a row in the truth table where
$\phi$ is assigned $1$ and $\psi$ is assigned $0$, then the sequent
$\phi\vdash\psi$ cannot be proven. Similarly, if there is a row in
the truth table where $\psi$ is assigned $0$, then $\vdash\psi$ cannot
be proven.
\begin{exercise} Prove the steps of the soundness theorem for the
$\vee$I and $\vee$E rules. \end{exercise}
\begin{exercise} Show that if $\phi$ is inconsistent, then any
substitution instance of $\phi$ is inconsistent. Here we mean
``inconsistent'' in the semantic sense of being assigned $0$ by all
valuations. \end{exercise}
\begin{exercise} Show that if $\phi$ is a contingent sentence, then $\phi$ has a
contradictory substitution instance. \end{exercise}
\section{Disjunctive Normal Form}
What we want to do next is to prove the \gls{complete} theorem: if an
argument is truth-preserving, then it can be proven with our inference
rules. That's actually a non-trivial theorem that was only discovered
in the early twentieth century. So, to get there, we're going to need
to do some work. We'll start with something that might seem to be
nothing but boring syntactic bookkeeping. We define a particular kind
of form a sentence can take. Then we show that every sentence is
provably equivalent to one with this particular kind of
form.\footnote{It's at this point where it becomes crucial that we
have {\it enough} inference rules. If we didn't have enough rules
then not every sentence would be provably equivalent to one in this
form.} It turns out, however, that this fact is quite useful --- in
particular, because sentences in this form wear their inferential
relations on their sleeves. (It's as if these sentences have an
address that shows where they live in logical space.)
\begin{defn} A sentence $\phi$ is in \gls{dnf} just in case it is a disjunction of
conjunctions of literals (atomic and negated atomic
sentences). \end{defn}
For example, the following sentences are in DNF:
\[ P \qquad P\wedge \neg Q \qquad (P\wedge \neg Q)\vee (\neg P\wedge
Q) \]
We can define the family of DNF formulas inductively as follows:
\begin{itemize}
\item All conjunctions of literals (atomic and negated atomic sentences) are DNF.
\item All disjunctions of DNF formulas are DNF.
\end{itemize}
The disjunctive normal form theorem will show that every propositional
logic sentence is provably equivalent to a DNF sentence. That result
can be proven in a couple of different ways, and it's useful in a
couple of different ways. On the one hand, one can prove the DNF
theorem directly by establishing a bunch of sequents, and then using
mathematical induction to generalize to all sentences. In that case,
the DNF theorem is useful as a step along the way to proving the
completeness theorem for propositional logic. On the other hand, if
one has established the completeness theorem by some other means, then
it gives a quick proof of the DNF theorem. In what follows, we'll
first sketch the argument from completeness to the DNF theorem. Then
we'll sketch a direct proof of the DNF theorem.
Suppose first that if two sentences have the same truth table, then
they are provably equivalent. (This supposition is a direct
consequence of the completeness theorem.) Now given a sentence
$\phi$, we will construct a DNF sentence $\phi '$ from the truth-table
for $\phi$. If $\phi$ is always false, i.e.\ it has $0$ in all rows
of the main column, then let $\phi '$ be the sentence
$P\wedge \neg P$. Otherwise, for each row $i$ in which $\phi$ is
true, let $\psi _i$ be the conjunction of all the atomic sentences
that are assigned true in that row, along with the negations of all
the atomic sentences that are assigned false in that row. Note that
$\psi _i$ is true on row $i$ of the truth table, and false on every
row $j\neq i$. Finally, let $\phi '$ be the disjunction of all the
$\psi _i$ that we have just constructed.
It's obvious that $\phi '$ is DNF. So we just need to show that
$\phi$ and $\phi '$ have the same truth table. Suppose that we set
them side by side, and consider row $i$ of the truth table. If $\phi$
is true on row $i$, then one of the disjuncts of $\phi '$ is
$\psi _i$, which is true on row $i$. Hence $\phi '$ is true on row
$i$. Conversely, if $\phi '$ is true on row $i$, then the disjunct
$\psi _i$ appears in it; and by construction $\phi$ is true on row
$i$. Thus, we have shown that $\phi$ and $\phi '$ have the same truth
table. If we assume completeness, it follows that $\phi$ and $\phi '$
are provably equivalent.
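The construction just described is entirely mechanical: one conjunction per true row, disjoined together. It can be sketched in Python (tuple encoding ours), with a brute-force check that the resulting sentence has the same truth table as the original:

```python
from itertools import product

def ev(s, v):
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

def atoms(s):
    return {s} if isinstance(s, str) else set().union(*(atoms(a) for a in s[1:]))

def big(op, items):
    """Fold a list of sentences into one nested conjunction/disjunction."""
    out = items[0]
    for x in items[1:]:
        out = (op, out, x)
    return out

def dnf_of(s):
    """Read a DNF equivalent of s off of s's truth table."""
    at = sorted(atoms(s))
    disjuncts = []
    for row in product([0, 1], repeat=len(at)):
        v = dict(zip(at, row))
        if ev(s, v) == 1:   # one conjunction of literals per true row
            disjuncts.append(big('and', [a if v[a] else ('not', a) for a in at]))
    if not disjuncts:       # always false: use P ∧ ¬P
        return ('and', at[0], ('not', at[0]))
    return big('or', disjuncts)

phi = ('not', ('to', 'P', 'Q'))       # ¬(P → Q)
phi2 = dnf_of(phi)
print(phi2)                           # → ('and', 'P', ('not', 'Q')), i.e. P ∧ ¬Q
for row in product([0, 1], repeat=2):
    v = dict(zip('PQ', row))
    assert ev(phi, v) == ev(phi2, v)
```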
This somewhat abstract argument can be illustrated by means of an
example. Consider the sentence $\phi\equiv (P\to R)\to (Q\wedge R)$.
If you write out the full truth table of $\phi$, you'll find that it's
true in rows 1,2,4, and 5, and false in all other rows. Thus, the
above recipe yields the sentence
\[ (P\wedge Q\wedge R)\vee (P\wedge Q\wedge\neg R)\vee (P\wedge\neg
Q\wedge \neg R)\vee (\neg P\wedge Q\wedge R) \] It's pretty easy to
see here that $\phi '\vdash \phi$. If one performed a big disjunction
elimination, then the first and fourth disjuncts immediately yield
$Q\wedge R$, and positive paradox yields $\phi$. The second and third
disjuncts yield $P\wedge\neg R$, hence $\neg (P\to R)$, and then
$\phi$ by negative paradox. Since $\phi '$ is a disjunction, it's a
bit more difficult to see that $\phi\vdash \phi '$. However, you'll
recall that $\phi$ entails $\neg (P\to R)\vee (Q\wedge R)$, which in
turn entails $(P\wedge\neg R)\vee (Q\wedge R)$. This last formula is
actually in DNF; and although not strictly identical to $\phi '$, it's
not hard to see how it's related to $\phi '$.
Making liberal use of known equivalences, we can rewrite $\phi '$ as
\[ \begin{array}{l} (P\wedge \neg R\wedge Q)\vee (P\wedge \neg R\wedge
\neg Q)\vee
(Q\wedge R\wedge P)\vee (Q\wedge R\wedge \neg P) \\
\dashv\vdash [(P\wedge \neg R)\wedge (Q\vee \neg Q)]\vee
[(Q\wedge
R)\wedge (P\vee \neg P)] \\
\dashv\vdash (P\wedge \neg R)\vee (Q\wedge R) \end{array} \] This
example shows that a sentence's DNF equivalent is not in general unique.
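The equivalences claimed in this example can be double-checked semantically: $\phi$, the four-disjunct $\phi '$, and the shorter $(P\wedge\neg R)\vee (Q\wedge R)$ all have the same truth table. A Python sketch (tuple encoding ours):

```python
from itertools import product

def ev(s, v):
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

def conj(*lits):
    """Fold literals into one nested conjunction."""
    out = lits[0]
    for l in lits[1:]:
        out = ('and', out, l)
    return out

nP, nQ, nR = ('not', 'P'), ('not', 'Q'), ('not', 'R')
phi = ('to', ('to', 'P', 'R'), ('and', 'Q', 'R'))        # (P→R)→(Q∧R)
phi_prime = ('or', ('or', ('or',
    conj('P', 'Q', 'R'), conj('P', 'Q', nR)),
    conj('P', nQ, nR)), conj(nP, 'Q', 'R'))
short = ('or', ('and', 'P', nR), ('and', 'Q', 'R'))      # (P∧¬R)∨(Q∧R)

for row in product([0, 1], repeat=3):
    v = dict(zip('PQR', row))
    assert ev(phi, v) == ev(phi_prime, v) == ev(short, v)
print("all three sentences agree on every valuation")
```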
We now turn to the direct proof of the DNF theorem. It's easiest,
in fact, to prove something stronger, for which we need another
definition.
\begin{defn} A sentence is in \gls{cnf} just in case it's a
conjunction of disjunctions of literals. \end{defn}
\begin{dnf} Every sentence $\phi$ is provably equivalent to a sentence
$\phi ^d$ in disjunctive normal form, and to a sentence $\phi ^c$ in
conjunctive normal form. \end{dnf}
\begin{proof} We argue by induction on the construction of sentences.
\begin{itemize}
\item Each atomic sentence is both in DNF and in CNF.
\item Suppose that $\phi$ is equivalent to $\phi ^c$ and to $\phi ^d$.
Then $\neg \phi$ is equivalent to $\neg \phi ^d$, which has the form
$\neg (\psi _1\vee\cdots\vee\psi _n)$. By DeMorgan's rule, the
latter is equivalent to
$\neg \psi _1\wedge \cdots \wedge\neg \psi _n$. By another
application of DeMorgan's, each $\neg \psi _i$ is equivalent to a
disjunction of literals. Putting everything together,
$\neg \phi ^d$ is equivalent to a CNF sentence. A similar argument
shows that $\neg \phi$ is equivalent to $\neg \phi ^c$, which is
equivalent to a DNF sentence. Therefore, $\neg \phi$ is equivalent
to sentences in both CNF and DNF.
\item For disjunction and conjunction, we first note that $\vee$
trivially preserves the family of DNF sentences, and $\wedge$
trivially preserves the family of CNF sentences. Thus, if $\phi$
and $\psi$ are sentences that satisfy the hypothesis of the theorem,
then $\phi\vee\psi$ is equivalent to a DNF sentence. It's also
equivalent to $\neg (\neg \phi\wedge\neg\psi )$, which is equivalent
to a CNF sentence. A similar argument shows that $\phi\wedge\psi$
is equivalent to DNF and CNF sentences.
\item For conditionals, we have
$\phi\to\psi\dashv\vdash\neg \phi\vee\psi$. By the previous two
steps, if $\phi$ and $\psi$ satisfy the hypotheses of the theorem,
so does $\neg\phi\vee\psi$.
\end{itemize} \end{proof}
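The inductive proof translates directly into a recursive rewriting procedure: rewrite $\to$ using $\phi\to\psi\dashv\vdash\neg\phi\vee\psi$, push negations inward with De Morgan and double negation, and distribute $\wedge$ over $\vee$. A Python sketch (tuple encoding ours; this follows the proof's strategy, though it is not the only way to compute a DNF equivalent):

```python
def to_dnf(s):
    """Rewrite s into disjunctive normal form."""
    if isinstance(s, str):
        return s
    op, *a = s
    if op == 'to':                                   # φ→ψ ⟚ ¬φ∨ψ
        return to_dnf(('or', ('not', a[0]), a[1]))
    if op == 'or':
        return ('or', to_dnf(a[0]), to_dnf(a[1]))
    if op == 'and':
        return dist(to_dnf(a[0]), to_dnf(a[1]))
    # op == 'not': push the negation inward
    b = a[0]
    if isinstance(b, str):
        return ('not', b)                            # already a literal
    bop, *ba = b
    if bop == 'not':                                 # ¬¬φ ⟚ φ
        return to_dnf(ba[0])
    if bop == 'and':                                 # De Morgan
        return to_dnf(('or', ('not', ba[0]), ('not', ba[1])))
    if bop == 'or':                                  # De Morgan
        return to_dnf(('and', ('not', ba[0]), ('not', ba[1])))
    return to_dnf(('and', ba[0], ('not', ba[1])))    # ¬(φ→ψ) ⟚ φ∧¬ψ

def dist(x, y):
    """Conjoin two DNF sentences, distributing ∧ over ∨."""
    if isinstance(x, tuple) and x[0] == 'or':
        return ('or', dist(x[1], y), dist(x[2], y))
    if isinstance(y, tuple) and y[0] == 'or':
        return ('or', dist(x, y[1]), dist(x, y[2]))
    return ('and', x, y)

phi = ('not', ('or', 'P', ('and', 'Q', 'R')))        # ¬(P ∨ (Q∧R))
print(to_dnf(phi))
# → ('or', ('and', ('not','P'), ('not','Q')), ('and', ('not','P'), ('not','R')))
```

The output corresponds to $(\neg P\wedge\neg Q)\vee (\neg P\wedge\neg R)$, exactly what two applications of De Morgan followed by distribution give by hand.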
\begin{exercise} Go back through the proof of the DNF theorem and
identify each provable equivalence that was cited. Now identify
which inference rules are needed to prove those equivalences. Are
any of the primitive rules of inference not needed for the proof to
go through? \end{exercise}
What use is it that every sentence is equivalent to a DNF sentence?
For one, it gives us a quick way of understanding all the different
possible logical relations between sentences. Consider, for example,
the case of all sentences whose only atomic sentence is $P$. In this
case, the only literals are $P$ and $\neg P$, and every conjunction
of literals is equivalent to $P$, $\neg P$, or $P\wedge\neg P$.
Consequently, the DNF formulas are equivalent to
$P,\neg P,P\wedge\neg P$, or $P\vee\neg P$. That is, every sentence
is logically equivalent to one of these four sentences.
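This four-way classification can be confirmed by exhaustive search: generate sentences over the single atom $P$ and collect their truth tables. A Python sketch (tuple encoding ours):

```python
def ev(s, v):
    if isinstance(s, str): return v[s]
    op, *a = s
    if op == 'not': return 1 - ev(a[0], v)
    if op == 'and': return min(ev(a[0], v), ev(a[1], v))
    if op == 'or':  return max(ev(a[0], v), ev(a[1], v))
    return max(1 - ev(a[0], v), ev(a[1], v))   # op == 'to'

# Two rounds of applying the connectives to sentences over the atom P.
sents = ['P']
for _ in range(2):
    snapshot = list(sents)
    sents += [('not', s) for s in snapshot]
    sents += [(op, a, b) for op in ('and', 'or', 'to')
              for a in snapshot for b in snapshot]

# A sentence over P has a two-row truth table: its values at v(P)=0 and v(P)=1.
tables = {(ev(s, {'P': 0}), ev(s, {'P': 1})) for s in sents}
assert tables == {(0, 0), (0, 1), (1, 0), (1, 1)}
print(len(tables), "distinct truth tables, matching the four classes above")
```

Deeper nesting produces more sentences but no new truth tables: the four classes $P\wedge\neg P$, $P$, $\neg P$, $P\vee\neg P$ exhaust the possibilities.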
Furthermore, if two sentences $\phi,\psi$ are in DNF, then it can be
quite easy to see whether or not $\phi\vdash\psi$. In particular,
$\phi\vdash\psi$ whenever each conjunction in $\phi$ contains, among
its conjuncts, all the literals of some conjunction in $\psi$. Consider, for example, the two
DNF formulas: