Ki Design Decisions
Raw Pointers
We decided against raw pointers because they're inherently unsafe.
`&&`, `||`, `!` and `===` vs. `and`, `or`, `not` and `is`
After some experience with Lua, we started to feel that symbolic operators like
`&&` were easier to discern than alphabetical operators like `and`.
Alphabetical operators look too much like the rest of the expression, and thus
they don't split it up as clearly, even with syntax highlighting.
The main drawback with the symbolic operators has traditionally been that it's
very easy to use the wrong one in C and C++, because they accept any
expression in a conditional clause. In those languages, it's not uncommon to
see mistakes like `if (name & age) { /* stuff */ }`, which is easily seen as a
bug only if you are reading deliberately enough to realize that `(name & age)`
is semantically nonsensical (probably, anyway).
Ki only allows boolean expressions in conditional clauses, so it is free to realize the readability gains of symbolic operators without the risk of inadvertently using the wrong operator.
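As a hypothetical sketch of what that buys (the variables and the `!=`/`>` comparisons are illustrative, and we're assuming Ki even has a bitwise `&`; if it doesn't, the first form is simply a syntax error):

```
# Rejected: `name & age` is not a boolean expression, so the classic
# C/C++ slip can't compile silently.
if (name & age) {
    # stuff
}

# Accepted: each operand is a boolean expression, and the symbolic
# && stands out clearly from the rest of the clause.
if (name != "" && age > 0) {
    # stuff
}
```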
Allowing Increment and Decrement (`++` and `--`)
Many modern languages (Python, Rust, Ruby) do not support the increment and decrement operators. The usual arguments against them are:
- The "hidden" non-atomic assignment can cause surprising problems
- Supporting both
++
and+=
or--
and-=
is redundant, and+=
/-=
are more powerful besides - Ease of parsing
Generally, we find `++` and `--` to be very convenient; so convenient, in fact,
that the redundancy and (very slight) parsing difficulty are entirely worth it.
Further, we don't think that the assignment aspect of increment and decrement
is hidden; it's a core part of the operator and fundamental to the definition
of "increment" and "decrement". While we do agree that programmers often
assume that assignment is atomic (which leads to data race bugs), we believe
the framing of this argument is somewhat skewed. Any assignment of values larger
than a byte is non-atomic. If something concurrently modifies the assigned
value, the result is undefined. If our motivation were really to remove
non-atomic assignments, we'd have to go much, much further than simply removing
`++` and `--`.
We are on the fence, however, due to readability. It is relatively easy to
scan for the `=` operator (or search for `\s=\s`) to find assignments and
surround them with locks (yes, there should probably be a better synchronization
strategy in place, but "should" doesn't matter a lot in the "Real World"); it
is less easy to find uses of `=`, `++`, and `--`.
For now, we are betting that the convenience of increment and decrement outweighs the hassle of additional searches when adding ad hoc synchronization after the fact.
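As a rough illustration of that search problem (the `counter` variable is hypothetical):

```
counter = counter + 1    # found by searching for \s=\s
counter += 1             # missed by \s=\s; needs a separate search for +=
counter++                # missed by both; needs yet another search
```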
Return as a Function
(Technically, `return` is a statement, along with `die`, `echo`, and `fail`.)
Originally `return` was a statement without parentheses, ex:

```
return "hey there"
```
Ki does not use statement terminators (like `;` in C or JavaScript), nor is it
whitespace sensitive. As a result, inserting a `return` statement to
short-circuit a function (e.g., for debugging) would have unintended effects:
```
queryset = setup_a_database_query()
echo("SQL: ${queryset.query}")
return # Bail early
run_query(queryset)
```
This `return` statement would return the value of `run_query`, or cause a
compiler error if `run_query` had no return value. Requiring parentheses
solves this problem.
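With the parenthesized form, the same debugging snippet reads unambiguously (a sketch; it assumes `return()` with no argument returns nothing, which is exactly the open question below):

```
queryset = setup_a_database_query()
echo("SQL: ${queryset.query}")
return()               # Bail early: clearly takes no value
run_query(queryset)    # No longer swallowed into the return
```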
We're currently debating leaving `return` (with no parentheses) for returning
nothing, and only requiring parentheses when returning values. The
inconsistency is irksome, but probably less irksome than `return()` vs.
`return`.
Parenthesized Block Clauses
We chose to wrap block clauses in parentheses, ex:

```
if (person.name == "Charlie") {
    # blah blah blah
}
```
vs.
if person.name == "Charlie" {
# blah blah blah
}
We reasoned that the parentheses stick out a little better and make multiline clauses far more readable.
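For example, in a multiline clause the closing parenthesis marks exactly where the condition ends and the block begins (the extra `person` fields and the `>=` comparison are hypothetical):

```
if (person.name == "Charlie" &&
    person.age >= 21 &&
    !person.banned) {
    # blah blah blah
}
```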
No `:bool` Coercion Operator
One of the key features of Ki is its unambiguous boolean expressions. If types can be coerced to booleans, those expressions become much more ambiguous.
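A hypothetical sketch of what that rules out (`read_name` and the `!=` comparison are illustrative):

```
name = read_name()

if (name) {            # Error: `name` is a string, not a boolean expression
    echo("Hi, ${name}")
}

if (name != "") {      # OK: an explicit, unambiguous boolean expression
    echo("Hi, ${name}")
}
```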
Implicit vs. Explicit `self`
Ki has gone back and forth on implicit vs. explicit `self`. It's pretty much
the basic implicit/explicit debate: convenience vs. clarity.
There are three main places where `self` is used: in method definitions, when
calling instance methods from other instance methods, and when using instance
variables. We think the explicit scoping is important: Java and C++ travel up
scopes to resolve identifiers and therefore don't require a `self` (or in C++'s
case, `this`) for method calls and variables. In Java this isn't too confusing
because there are no global functions or variables, but in C++ the programmer
cannot tell at a glance (nor can the compiler) what scope a function call or
variable belongs to.
Beyond being a permanent, if slight, source of confusion, our main issue with
C++'s implicit `this` is that it makes it possible to unintentionally refer to
the wrong thing.
However, in method definitions, we think that `self` is so common that the
explicitness becomes burdensome. Furthermore, languages like Python allow the
programmer to use anything in place of `self`, but that freedom is never, ever
used (for good reason).
Rust provides us with an unusual case: it (mostly) uses `self` to show
mutability (`&self` vs. `&mut self`). That's pretty cool, and the consistency
feels very nice.
So we think the best of both worlds is:
- Explicit `self` when using methods and instance variables, i.e. `self.get_name()` or `self.name`
- Implicit `self` when defining methods
- Methods that require a mutable `self` reference (`self` is always a reference) must end in `!` (see the sketch below)
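A purely hypothetical sketch of how those rules might read together (the class and method definition syntax here is assumed, not settled Ki syntax):

```
class Person {
    # Implicit self: the definition doesn't list it as a parameter.
    get_name() {
        return(self.name)                # Explicit self for instance variables
    }

    greet() {
        echo("Hi, ${self.get_name()}")   # Explicit self for method calls
    }

    # Needs a mutable self reference, so the name ends in `!`.
    rename!(new_name) {
        self.name = new_name
    }
}
```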