Microkernel thing OS experiment (Zig ⚡)

Initial dump

Much of this was imported from separate (private) repositories.
I haven't had a chance to work on this for the past few months.

pci.express ced51a0f

.gitignore  (+4)
+ zig-out
+ .DS_Store
+ .zig-cache
+ blobs
LICENSE  (+675)
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ … (standard GNU GPL v3 license text, truncated in this dump)
+
the Free Software Foundation, either version 3 of the License, or
+
(at your option) any later version.
+
+
This program is distributed in the hope that it will be useful,
+
but WITHOUT ANY WARRANTY; without even the implied warranty of
+
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+
GNU General Public License for more details.
+
+
You should have received a copy of the GNU General Public License
+
along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+
Also add information on how to contact you by electronic and paper mail.
+
+
If the program does terminal interaction, make it output a short
+
notice like this when it starts in an interactive mode:
+
+
<program> Copyright (C) <year> <name of author>
+
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+
This is free software, and you are welcome to redistribute it
+
under certain conditions; type `show c' for details.
+
+
The hypothetical commands `show w' and `show c' should show the appropriate
+
parts of the General Public License. Of course, your program's commands
+
might be different; for a GUI interface, you would use an "about box".
+
+
You should also get your employer (if you work as a programmer) or school,
+
if any, to sign a "copyright disclaimer" for the program, if necessary.
+
For more information on this, and how to apply and follow the GNU GPL, see
+
<https://www.gnu.org/licenses/>.
+
+
The GNU General Public License does not permit incorporating your program
+
into proprietary programs. If your program is a subroutine library, you
+
may consider it more useful to permit linking proprietary applications with
+
the library. If this is what you want to do, use the GNU Lesser General
+
Public License instead of this License. But first, please read
+
<https://www.gnu.org/licenses/why-not-lgpl.html>.
+
+6
README.md
···
+
# Frostium Operating System
+
+
This is the monorepo for the Frostium Operating System. Run
+
`nix develop` to get a development environment, and `zig build` to
+
build for the default amd64 architecture. To run the operating
+
system under QEMU, run `zig build qemu`.
+9
assets/limine.conf
···
+
/+Frostium Kernels
+
//AMD64 Kernel
+
protocol: limine
+
path: boot():/kernel-amd64.elf
+
module_path: boot():/init-amd64.elf
+
//aarch64 Kernel
+
protocol: limine
+
path: boot():/kernel-aarch64.elf
+
module_path: boot():/init-aarch64.elf
+54
build.zig
···
+
const std = @import("std");
+
const build_helpers = @import("build_helpers");
+
+
pub fn build(b: *std.Build) void {
+
const arch = b.option(build_helpers.Architecture, "arch", "The target architecture") orelse .amd64;
+
+
const ukernel_dep = b.dependency("ukernel", .{
+
.arch = arch,
+
});
+
const ukernel_artifact = ukernel_dep.artifact("ukernel");
+
const ukernel_inst = b.addInstallFile(ukernel_artifact.getEmittedBin(), arch.kernelExeName());
+
b.default_step.dependOn(&ukernel_inst.step);
+
+
const root_dep = b.dependency("root_server", .{
+
.arch = arch,
+
});
+
const root_artifact = root_dep.artifact("root_server");
+
const root_inst = b.addInstallFile(root_artifact.getEmittedBin(), arch.rootTaskName());
+
b.default_step.dependOn(&root_inst.step);
+
+
// Run in QEMU
+
run_blk: {
+
// Step 1: Install edk2 files to zig-out
+
const ovmf_code, const ovmf_vars = blk: {
+
const ovmf_dep = b.lazyDependency("edk2_binary", .{}) orelse break :run_blk;
+
break :blk .{
+
ovmf_dep.path("bin/RELEASEX64_OVMF_CODE.fd"),
+
ovmf_dep.path("bin/RELEASEX64_OVMF_VARS.fd"),
+
};
+
};
+
+
const loader_path = blk: {
+
const limine_dep = b.lazyDependency("limine_binary", .{}) orelse break :run_blk;
+
break :blk limine_dep.path("BOOTX64.EFI");
+
};
+
+
const code_install = b.addInstallFile(ovmf_code, "OVMF_CODE_X64.fd");
+
const vars_install = b.addInstallFile(ovmf_vars, "OVMF_VARS_X64.fd");
+
const loader_install = b.addInstallFileWithDir(loader_path, .{ .custom = "EFI/BOOT" }, "BOOTX64.EFI");
+
const config_install = b.addInstallFileWithDir(b.path("assets/limine.conf"), .{ .custom = "limine" }, "limine.conf");
+
+
const qemu_prepare_step = b.step("qemu_prepare", "Prepare for QEMU run");
+
qemu_prepare_step.dependOn(&code_install.step);
+
qemu_prepare_step.dependOn(&vars_install.step);
+
qemu_prepare_step.dependOn(&loader_install.step);
+
qemu_prepare_step.dependOn(&config_install.step);
+
+
const qemu_cmd = b.addSystemCommand(&.{ "qemu-system-x86_64", "-smp", "4", "-m", "4G", "-monitor", "stdio", "-drive", "format=raw,file=fat:rw:zig-out", "-drive", "if=pflash,format=raw,readonly=on,file=zig-out/OVMF_CODE_X64.fd", "-drive", "if=pflash,format=raw,file=zig-out/OVMF_VARS_X64.fd" });
+
const qemu_step = b.step("qemu", "Run in QEMU");
+
qemu_step.dependOn(b.default_step);
+
qemu_step.dependOn(qemu_prepare_step);
+
qemu_step.dependOn(&qemu_cmd.step);
+
}
+
}
+37
build.zig.zon
···
+
.{
+
.name = .frostium,
+
.version = "0.0.1",
+
.fingerprint = 0x8d91c7f5c1556c13,
+
.minimum_zig_version = "0.15.1",
+
.dependencies = .{
+
.ukernel = .{
+
.path = "components/ukernel",
+
},
+
.root_server = .{
+
.path = "components/root_server",
+
},
+
.build_helpers = .{
+
.path = "components/build_helpers",
+
},
+
.limine_binary = .{
+
.url = "git+https://codeberg.org/Limine/Limine?ref=v9.x-binary#acf1e35c4685dba7ef271013db375a727c340ff7",
+
.hash = "N-V-__8AAOkzSACT_9p6kmSSly1l008erzXuG39Z6r54B_y0",
+
// Codeberg is frequently unreachable, so keep this dependency eager
+
// .lazy = true,
+
},
+
.edk2_binary = .{
+
.url = "git+https://github.com/retrage/edk2-nightly#23068f498687bf64f2b8f80fbcf11e82d987fd9b",
+
.hash = "N-V-__8AADFwUgat_qAH_zWVQeUqhpPP05V2Gr_XYRAqhIkb",
+
.lazy = true,
+
},
+
},
+
.paths = .{
+
"build.zig",
+
"build.zig.zon",
+
"flake.nix",
+
"flake.lock",
+
"LICENSE",
+
"README.md",
+
"components",
+
"assets",
+
},
+
}
+9
components/build_helpers/build.zig
···
+
const std = @import("std");
+
const build_helpers = @import("root.zig");
+
pub const Architecture = build_helpers.Architecture;
+
+
pub fn build(b: *std.Build) void {
+
_ = b.addModule("build_helpers", .{
+
.root_source_file = b.path("root.zig"),
+
});
+
}
+32
components/build_helpers/root.zig
···
+
const std = @import("std");
+
+
pub const Architecture = enum {
+
const Self = @This();
+
aarch64,
+
riscv64,
+
amd64,
+
+
pub fn get(self: *const Self) std.Target.Cpu.Arch {
+
return switch (self.*) {
+
.aarch64 => .aarch64,
+
.riscv64 => .riscv64,
+
.amd64 => .x86_64,
+
};
+
}
+
+
pub fn kernelExeName(self: *const Self) []const u8 {
+
return switch (self.*) {
+
.aarch64 => "kernel-aarch64.elf",
+
.riscv64 => "kernel-riscv64.elf",
+
.amd64 => "kernel-amd64.elf",
+
};
+
}
+
+
pub fn rootTaskName(self: *const Self) []const u8 {
+
return switch (self.*) {
+
.aarch64 => "init-aarch64.elf",
+
.riscv64 => "init-riscv64.elf",
+
.amd64 => "init-amd64.elf",
+
};
+
}
+
};
+35
components/root_server/build.zig
···
+
const std = @import("std");
+
const build_helpers = @import("build_helpers");
+
+
pub fn build(b: *std.Build) void {
+
const arch = b.option(build_helpers.Architecture, "arch", "The target root_server architecture") orelse .amd64;
+
+
// set CPU features based on the architecture
+
const target = b.resolveTargetQuery(.{
+
.cpu_arch = arch.get(),
+
.os_tag = .freestanding,
+
.abi = .none,
+
});
+
const optimize = b.standardOptimizeOption(.{});
+
+
const main_mod = b.createModule(.{
+
.root_source_file = b.path("src/main.zig"),
+
.target = target,
+
.optimize = optimize,
+
});
+
+
const config = b.addOptions();
+
config.addOption(build_helpers.Architecture, "arch", arch);
+
+
const build_helpers_dep = b.dependency("build_helpers", .{});
+
+
main_mod.addImport("config", config.createModule());
+
main_mod.addImport("build_helpers", build_helpers_dep.module("build_helpers"));
+
+
const exe = b.addExecutable(.{
+
.name = "root_server",
+
.root_module = main_mod,
+
});
+
+
b.installArtifact(exe);
+
}
+14
components/root_server/build.zig.zon
···
+
.{
+
.name = .root_server,
+
.version = "0.0.0",
+
.fingerprint = 0x78d73937eee4ef25,
+
.minimum_zig_version = "0.15.1",
+
.dependencies = .{
+
.build_helpers = .{ .path = "../build_helpers" },
+
},
+
.paths = .{
+
"build.zig",
+
"build.zig.zon",
+
"src",
+
},
+
}
+10
components/root_server/src/main.zig
···
+
const std = @import("std");
+
const os = @import("os.zig");
+
+
export fn _start() callconv(.c) noreturn {
+
_ = os.syscall1(SYS_poke, 0xB16B00B5BADBABE);
+
_ = os.syscall1(SYS_exit, 0);
+
unreachable;
+
}
+
pub const SYS_exit = 69;
+
pub const SYS_poke = 420;
+38
components/root_server/src/os.zig
···
+
const config = @import("config");
+
const build_helpers = @import("build_helpers");
+
+
pub const syscall1 = switch (config.arch) {
+
.aarch64 => Aarch64.syscall1,
+
.amd64 => Amd64.syscall1,
+
.riscv64 => Riscv64.syscall1,
+
};
+
+
const Aarch64 = struct {
+
pub fn syscall1(number: usize, arg1: usize) usize {
+
return asm volatile ("svc #0"
+
: [ret] "={x0}" (-> usize),
+
: [number] "{x8}" (number),
+
[arg1] "{x0}" (arg1),
+
: .{ .memory = true });
+
}
+
};
+
+
const Amd64 = struct {
+
pub fn syscall1(number: usize, arg1: usize) usize {
+
return asm volatile ("syscall"
+
: [ret] "={rax}" (-> usize),
+
: [number] "{rax}" (number),
+
[arg1] "{rdi}" (arg1),
+
: .{ .rcx = true, .r11 = true });
+
}
+
};
+
+
const Riscv64 = struct {
+
pub fn syscall1(number: usize, arg1: usize) usize {
+
return asm volatile ("ecall"
+
: [ret] "={x10}" (-> usize),
+
: [number] "{x17}" (number),
+
[arg1] "{x10}" (arg1),
+
: .{ .memory = true });
+
}
+
};
+101
components/ukernel/arch/aarch64/boot.zig
···
+
const limine = @import("limine");
+
const std = @import("std");
+
const arch = @import("root.zig");
+
const common = @import("common");
+
const console = @import("console");
+
const log = std.log.scoped(.aarch64_init);
+
+
pub const limine_requests = struct {
+
// export var start_marker: limine.RequestsStartMarker linksection(".limine_reqs_start") = .{};
+
// export var end_marker: limine.RequestsEndMarker linksection(".limine_reqs_end") = .{};
+
+
pub export var base_revision: limine.BaseRevision = .{ .revision = 3 };
+
pub export var framebuffer: limine.FramebufferRequest = .{};
+
pub export var hhdm: limine.HhdmRequest = .{};
+
pub export var memmap: limine.MemoryMapRequest = .{};
+
pub export var rsdp_req: limine.RsdpRequest = .{};
+
pub export var dtb_req: limine.DtbRequest = .{};
+
pub export var modules: limine.ModuleRequest = .{};
+
pub export var mp: limine.SmpMpFeature.MpRequest = .{};
+
};
+
+
pub fn bsp_init() callconv(.c) noreturn {
+
// Temporary bring-up stub: paint the first framebuffer bytes, then halt.
+
// Everything after the die() below is dead code until aarch64 init is wired up.
+
if (limine_requests.framebuffer.response) |fb_response| {
+
if (fb_response.framebuffer_count > 0) {
+
const fb = console.Framebuffer.from_limine(fb_response.getFramebuffers()[0]);
+
common.init_data.framebuffer = fb;
+
@memset(fb.address[0..64], 0xFF);
+
}
+
}
+
arch.instructions.die();
+
// Don't optimize away the limine requests
+
inline for (@typeInfo(limine_requests).@"struct".decls) |decl| {
+
std.mem.doNotOptimizeAway(&@field(limine_requests, decl.name));
+
}
+
+
// If the base revision isn't supported, we can't boot
+
if (!limine_requests.base_revision.isSupported()) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
+
// Die if we don't have a memory map or Higher Half Direct Mapping
+
if (limine_requests.memmap.response == null) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
+
if (limine_requests.hhdm.response == null) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
const hhdm_offset = limine_requests.hhdm.response.?.offset;
+
common.init_data.hhdm_slide = hhdm_offset;
+
+
// Add in a framebuffer if found
+
initConsole();
+
+
// Add in ACPI/dtb if found
+
initHwDesc();
+
+
// Set up the temporary Physical Memory Allocator
+
common.mm.bootmem.init();
+
+
// Attach the root task
+
if (limine_requests.modules.response) |module_response| {
+
if (module_response.module_count > 0) {
+
const mod = module_response.modules.?[0];
+
const mod_addr: [*]align(4096) u8 = @ptrCast(mod.address);
+
const mod_size = mod.size;
+
log.info("Loading root task from {s} @ {*}", .{ mod.path, mod.address });
+
common.init_data.root_task = mod_addr[0..mod_size];
+
}
+
} else {
+
@branchHint(.unlikely);
+
@panic("No root task found!");
+
}
+
+
log.info("Nothing else to do!", .{});
+
+
arch.instructions.die();
+
}
+
+
fn initConsole() void {
+
if (limine_requests.framebuffer.response) |fb_response| {
+
if (fb_response.framebuffer_count > 0) {
+
const fb = console.Framebuffer.from_limine(fb_response.getFramebuffers()[0]);
+
common.init_data.framebuffer = fb;
+
// At this point, log becomes usable
+
common.init_data.console = console.Console.from_font(fb, console.DefaultFont);
+
}
+
}
+
}
+
+
fn initHwDesc() void {
+
if (limine_requests.dtb_req.response) |dtb_response| {
+
common.init_data.hardware_description = .{ .dtb = dtb_response.dtb_ptr };
+
}
+
if (limine_requests.rsdp_req.response) |rsdp_response| {
+
common.init_data.hardware_description = .{ .acpi_rsdp = rsdp_response.address };
+
}
+
}
+5
components/ukernel/arch/aarch64/instructions.zig
···
+
pub inline fn die() noreturn {
+
while (true) {
+
asm volatile ("wfi");
+
}
+
}
+45
components/ukernel/arch/aarch64/linker.ld
···
+
OUTPUT_FORMAT(elf64-littleaarch64)
+
ENTRY(_start)
+
+
PHDRS {
+
limine_reqs PT_LOAD;
+
text PT_LOAD;
+
rodata PT_LOAD;
+
data PT_LOAD;
+
dynamic PT_DYNAMIC;
+
}
+
+
SECTIONS {
+
. = 0xffffffff80000000;
+
+
.text : {
+
*(.text .text.*)
+
} :text
+
+
. = ALIGN(CONSTANT(MAXPAGESIZE));
+
+
.rodata : {
+
*(.rodata .rodata.*)
+
} :rodata
+
+
. = ALIGN(CONSTANT(MAXPAGESIZE));
+
+
.data : {
+
*(.data .data.*)
+
+
} :data
+
+
.dynamic : {
+
*(.dynamic)
+
} :data :dynamic
+
+
.bss : {
+
*(.bss .bss.*)
+
*(COMMON)
+
} :data
+
+
/DISCARD/ : {
+
*(.eh_frame*)
+
*(.note .note.*)
+
}
+
}
+27
components/ukernel/arch/aarch64/root.zig
···
+
pub const boot = @import("boot.zig");
+
pub const instructions = @import("instructions.zig");
+
// pub const structures = @import("structures/root.zig");
+
// pub const registers = @import("registers.zig");
+
const common = @import("common");
+
const std = @import("std");
+
+
// Early BSP init may override this if a more optimal
+
// page size is chosen.
+
var negotiated_page_size: u32 = 4096;
+
+
fn pageSize() usize {
+
return @intCast(negotiated_page_size);
+
}
+
+
pub const std_options: std.Options = .{
+
.logFn = common.aux.logFn,
+
.page_size_min = 4 << 10,
+
.page_size_max = 64 << 10,
+
.queryPageSize = pageSize,
+
};
+
pub const panic = std.debug.FullPanic(common.aux.panic);
+
+
comptime {
+
// Entry point (_start)
+
@export(&boot.bsp_init, .{ .name = "_start", .linkage = .strong });
+
}
+42
components/ukernel/arch/amd64/asm/traps.S
···
+
.section .text
+
+
.globl syscall_entry
+
.type syscall_entry, %function
+
syscall_entry:
+
+
# swapgs in the future
+
push %rcx
+
push %r11
+
push %r15
+
push %r9
+
push %r8
+
push %r10
+
push %rdx
+
push %r14
+
push %r13
+
push %r12
+
push %rbp
+
push %rbx
+
push %rax
+
push %rsi
+
push %rdi
+
+
call syscall_handler
+
+
pop %rdi
+
pop %rsi
+
pop %rax
+
pop %rbx
+
pop %rbp
+
pop %r12
+
pop %r13
+
pop %r14
+
pop %rdx
+
pop %r10
+
pop %r8
+
pop %r9
+
pop %r15
+
pop %r11
+
pop %rcx
+
+
sysretq
+364
components/ukernel/arch/amd64/boot.zig
···
+
const limine = @import("limine");
+
const std = @import("std");
+
const arch = @import("root.zig");
+
const common = @import("common");
+
const console = @import("console");
+
const log = std.log.scoped(.amd64_init);
+
const Idt = arch.structures.Idt;
+
const StandardGdt = arch.structures.gdt.StandardGdt;
+
const Tss = arch.structures.tss.Tss;
+
+
var per_cpu_init_data: PerCpuInitData = .{};
+
+
pub const limine_requests = struct {
+
export var start_marker: limine.RequestsStartMarker linksection(".limine_reqs_start") = .{};
+
export var end_marker: limine.RequestsEndMarker linksection(".limine_reqs_end") = .{};
+
+
pub export var base_revision: limine.BaseRevision linksection(".limine_reqs") = .{ .revision = 3 };
+
pub export var framebuffer: limine.FramebufferRequest linksection(".limine_reqs") = .{};
+
pub export var hhdm: limine.HhdmRequest linksection(".limine_reqs") = .{};
+
pub export var memmap: limine.MemoryMapRequest linksection(".limine_reqs") = .{};
+
pub export var rsdp_req: limine.RsdpRequest linksection(".limine_reqs") = .{};
+
pub export var dtb_req: limine.DtbRequest linksection(".limine_reqs") = .{};
+
pub export var modules: limine.ModuleRequest linksection(".limine_reqs") = .{};
+
pub export var mp: limine.SmpMpFeature.MpRequest linksection(".limine_reqs") = .{ .flags = .{ .x2apic = true } };
+
};
+
+
pub fn bsp_init() callconv(.c) noreturn {
+
// Don't optimize away the limine requests
+
inline for (@typeInfo(limine_requests).@"struct".decls) |decl| {
+
std.mem.doNotOptimizeAway(&@field(limine_requests, decl.name));
+
}
+
+
// If the base revision isn't supported, we can't boot
+
if (!limine_requests.base_revision.isSupported()) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
+
// Die if we don't have a memory map or Higher Half Direct Mapping
+
if (limine_requests.memmap.response == null) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
+
if (limine_requests.hhdm.response == null) {
+
@branchHint(.cold);
+
arch.instructions.die();
+
}
+
const hhdm_offset = limine_requests.hhdm.response.?.offset;
+
common.init_data.hhdm_slide = hhdm_offset;
+
+
// Add in a framebuffer if found
+
initConsole();
+
+
// Add in ACPI/dtb if found, prefer ACPI
+
initHwDesc();
+
+
// Set up the temporary Physical Memory Allocator
+
common.mm.bootmem.init();
+
+
// Attach the root task
+
if (limine_requests.modules.response) |module_response| {
+
if (module_response.module_count > 0) {
+
const mod = module_response.modules.?[0];
+
const mod_addr: [*]align(4096) u8 = @ptrCast(mod.address);
+
const mod_size = mod.size;
+
log.info("Loading root task from {s} @ {*}", .{ mod.path, mod.address });
+
common.init_data.root_task = mod_addr[0..mod_size];
+
}
+
} else {
+
@branchHint(.unlikely);
+
@panic("No root task found!");
+
}
+
+
// Initialize per-cpu data (GDT and TSS)
+
per_cpu_init_data.init();
+
+
// Install the IDT
+
initIdt();
+
+
// AP bootstrap
+
bootstrapAPs();
+
+
// Set up our own GDT and TSS
+
const gdt = &per_cpu_init_data.gdt_buf[0];
+
gdt.* = .{};
+
const tss = &per_cpu_init_data.tss_buf[0];
+
// TSS rsp 0x3800
+
tss.* = .{
+
.rsp0 = 0x3800,
+
.rsp1 = 0x3800,
+
.rsp2 = 0x3800,
+
};
+
+
gdt.tss_desc.set_tss_addr(tss);
+
gdt.load();
+
log.info("BSP successfully setup GDT+TSS!", .{});
+
+
log.info("Allocating code for userspace...", .{});
+
+
const user_code = common.init_data.bootmem.allocPhys(0x1000) catch @panic("user code page alloc failed");
+
const user_stack = common.init_data.bootmem.allocPhys(0x1000) catch @panic("user stack page alloc failed");
+
+
// TODO: load the actual root task ELF for fucks sake
+
// instead of the current glorified shellcode
+
+
// Map our executable page to 0x8000
+
common.mm.paging.mapPhys(.{
+
.vaddr = 0x8000,
+
.paddr = user_code,
+
.size = 0x1000,
+
.memory_type = .MemoryWriteBack,
+
.perms = .{
+
.executable = true,
+
.writable = false,
+
.userspace_accessible = true,
+
},
+
}) catch {
+
@panic("mapping user code page failed");
+
};
+
+
// Map our stack page at 0x3000 (stack top at 0x4000, grows down)
+
common.mm.paging.mapPhys(.{
+
.vaddr = 0x3000,
+
.paddr = user_stack,
+
.size = 0x1000,
+
.memory_type = .MemoryWriteBack,
+
.perms = .{
+
.executable = false,
+
.writable = true,
+
.userspace_accessible = true,
+
},
+
}) catch {
+
@panic("mapping user stack page failed");
+
};
+
+
// Place shellcode there (does a couple syscalls then jmp $ infinite loop)
+
const memory: [*]u8 = common.mm.physToHHDM([*]u8, user_code);
+
const shellcode = [_]u8{ 0x48, 0xBF, 0xE1, 0xAB, 0xDB, 0xBA, 0xB5, 0x00, 0x6B, 0xB1, 0x48, 0xBE, 0x06, 0x42, 0x69, 0x20, 0x94, 0x06, 0x42, 0x69, 0x0F, 0x05, 0xBF, 0x11, 0xCA, 0x00, 0x00, 0xBE, 0x69, 0x69, 0x69, 0x69, 0x0F, 0x05, 0xEB, 0xFE };
+
@memcpy(memory[0..@sizeOf(@TypeOf(shellcode))], shellcode[0..]);
+
+
// Set up MSRs to enable syscalls
+
init_syscalls();
+
+
// Finally, iretq into userspace
+
enter_userspace(0x8000, 0x69, 0x3800);
+
}
+
+
// Get ready for system calls (set MSRs)
+
fn init_syscalls() void {
+
// Set up the STAR MSR with the segment descriptors
+
const IA32_STAR = arch.registers.MSR(u64, 0xC0000081);
+
const star_value: u64 = 0 | @as(u64, arch.structures.gdt.StandardGdt.selectors.kernel_code) << 32 | (@as(u64, arch.structures.gdt.StandardGdt.selectors.tss_desc + 8) | 3) << 48;
+
IA32_STAR.write(star_value);
+
log.debug("Wrote 0x{x:0>16} to IA32_STAR", .{star_value});
+
+
// Set up the EFER MSR with SCE (System Call Enable)
+
const IA32_EFER = arch.registers.MSR(u64, 0xC0000080);
+
const efer_val = IA32_EFER.read() | 0b1;
+
IA32_EFER.write(efer_val);
+
log.debug("Wrote 0x{x:0>16} to IA32_EFER", .{efer_val});
+
+
// Set up LSTAR with the syscall handler and FMASK to clear interrupts
+
const IA32_LSTAR = arch.registers.MSR(u64, 0xC0000082);
+
IA32_LSTAR.write(@intFromPtr(syscall_entry));
+
+
log.debug("Wrote 0x{x:0>16} to IA32_LSTAR", .{@intFromPtr(syscall_entry)});
+
+
const IA32_FMASK = arch.registers.MSR(u64, 0xC0000084);
+
IA32_FMASK.write(1 << 9);
+
log.debug("Wrote 0x{x:0>16} to IA32_FMASK", .{1 << 9});
+
}
+
+
const syscall_entry = @extern(*anyopaque, .{
+
.name = "syscall_entry",
+
});
+
export fn syscall_handler(rdi: usize, rsi: usize) callconv(.c) void {
+
std.log.info("Got a syscall! rdi=0x{x}, rsi=0x{x}", .{ rdi, rsi });
+
}
+
+
fn enter_userspace(entry: u64, arg: u64, stack: u64) noreturn {
+
log.info("usercode64 GDT 0x{x}, userdata64 GDT 0x{x}", .{ arch.structures.gdt.StandardGdt.selectors.user_code, arch.structures.gdt.StandardGdt.selectors.user_data });
+
const cr3 = arch.registers.ControlRegisters.Cr3.read();
+
arch.registers.ControlRegisters.Cr3.write(cr3);
+
asm volatile (
+
\\ push %[userdata64]
+
\\ push %[stack]
+
\\ push $0x2
+
\\ push %[usercode64]
+
\\ push %[entry]
+
\\
+
\\ mov %[userdata64], %%rax
+
\\ mov %%rax, %%es
+
\\ mov %%rax, %%ds
+
\\
+
\\ xor %%rsi, %%rsi
+
\\ xor %%rax, %%rax
+
\\ xor %%rdx, %%rdx
+
\\ xor %%rcx, %%rcx
+
\\ xor %%rbp, %%rbp
+
\\ xor %%rbx, %%rbx
+
\\
+
\\ xor %%r8, %%r8
+
\\ xor %%r9, %%r9
+
\\ xor %%r10, %%r10
+
\\ xor %%r11, %%r11
+
\\ xor %%r12, %%r12
+
\\ xor %%r13, %%r13
+
\\ xor %%r14, %%r14
+
\\ xor %%r15, %%r15
+
\\
+
\\ iretq
+
\\
+
:
+
: [arg] "{rdi}" (arg),
+
[stack] "r" (stack),
+
[entry] "r" (entry),
+
[userdata64] "i" (arch.structures.gdt.StandardGdt.selectors.user_data),
+
[usercode64] "i" (arch.structures.gdt.StandardGdt.selectors.user_code),
+
);
+
unreachable;
+
}
+
+
fn initConsole() void {
+
if (limine_requests.framebuffer.response) |fb_response| {
+
if (fb_response.framebuffer_count > 0) {
+
const fb = console.Framebuffer.from_limine(fb_response.getFramebuffers()[0]);
+
common.init_data.framebuffer = fb;
+
// At this point, log becomes usable
+
common.init_data.console = console.Console.from_font(fb, console.DefaultFont);
+
common.init_data.console.?.setColor(0x3bcf1d, 0);
+
}
+
}
+
}
+
+
fn initHwDesc() void {
+
if (limine_requests.dtb_req.response) |dtb_response| {
+
common.init_data.hardware_description = .{ .dtb = dtb_response.dtb_ptr };
+
}
+
if (limine_requests.rsdp_req.response) |rsdp_response| {
+
common.init_data.hardware_description = .{ .acpi_rsdp = rsdp_response.address };
+
}
+
}
+
+
pub fn initIdt() void {
+
const idt_addr: usize = @intFromPtr(per_cpu_init_data.idt);
+
+
// Install the known exception handlers
+
per_cpu_init_data.idt.breakpoint.installHandler(breakpoint_handler);
+
per_cpu_init_data.idt.double_fault.installHandler(double_fault);
+
per_cpu_init_data.idt.general_protection_fault.installHandler(gpf);
+
per_cpu_init_data.idt.page_fault.installHandler(page_fault);
+
+
// Load the Idt Register
+
const reg: Idt.Idtr = .{ .addr = idt_addr, .limit = @sizeOf(Idt) - 1 };
+
reg.load();
+
}
+
+
// TODO: update the type reflection thing to make a custom
+
// function type for the ISR
+
pub const PageFaultErrorCode = packed struct {
+
present: bool,
+
write: bool,
+
user: bool,
+
reserved_write: bool,
+
instruction_fetch: bool,
+
protection_key: bool,
+
shadow_stack: bool,
+
_reserved: u8,
+
sgx: bool,
+
_reserved2: u48,
+
+
pub fn val(self: *const PageFaultErrorCode) u64 {
+
return @bitCast(self.*);
+
}
+
};
+
pub fn page_fault(stack_frame: *arch.structures.Idt.InterruptStackFrame, err_code_u64: u64) callconv(.{ .x86_64_interrupt = .{} }) void {
+
const err_code: PageFaultErrorCode = @bitCast(err_code_u64);
+
log.err("PAGE FAULT @ 0x{x:0>16}, code 0x{x}", .{ stack_frame.instruction_pointer, err_code.val() });
+
const cr2 = arch.registers.ControlRegisters.Cr2.read();
+
switch (err_code.write) {
+
true => log.err("Tried to write to vaddr 0x{x:0>16}", .{cr2}),
+
false => log.err("Tried to read from vaddr 0x{x:0>16}", .{cr2}),
+
}
+
log.err("dying...", .{});
+
arch.instructions.die();
+
}
+
+
pub fn breakpoint_handler(stack_frame: *Idt.InterruptStackFrame) callconv(.{ .x86_64_interrupt = .{} }) void {
+
log.warn("Breakpoint @ 0x{x:0>16}, returning execution...", .{stack_frame.instruction_pointer});
+
}
+
+
pub fn gpf(stack_frame: *Idt.InterruptStackFrame, err_code: u64) callconv(.{ .x86_64_interrupt = .{} }) void {
+
log.warn("General protection fault @ 0x{x:0>16}, error code {}, dying...", .{ stack_frame.instruction_pointer, err_code });
+
arch.instructions.die();
+
}
+
+
pub fn double_fault(stack_frame: *Idt.InterruptStackFrame, err_code: u64) callconv(.{ .x86_64_interrupt = .{} }) noreturn {
+
common.init_data.console.?.setColor(0xf40d17, 0);
+
log.err("FATAL DOUBLE FAULT @ 0x{x:0>16}, code 0x{x}", .{ stack_frame.instruction_pointer, err_code });
+
log.err("dying...", .{});
+
arch.instructions.die();
+
}
+
+
fn bootstrapAPs() void {
+
log.info("Bootstrapping APs...", .{});
+
const cpus = limine_requests.mp.response.?.getCpus();
+
for (cpus) |cpu| {
+
cpu.goto_address = ap_init;
+
}
+
}
+
+
fn ap_init(mp_info: *limine.SmpMpFeature.MpInfo) callconv(.c) noreturn {
+
// Set up the IDT
+
const idt_addr: usize = @intFromPtr(per_cpu_init_data.idt);
+
const reg: Idt.Idtr = .{ .addr = idt_addr, .limit = @sizeOf(Idt) - 1 };
+
reg.load();
+
+
// Set up our GDT and TSS
+
const gdt = &per_cpu_init_data.gdt_buf[mp_info.processor_id];
+
gdt.* = .{};
+
const tss = &per_cpu_init_data.tss_buf[mp_info.processor_id];
+
tss.* = .{};
+
+
gdt.tss_desc.set_tss_addr(tss);
+
gdt.load();
+
+
log.info("CPU {}: set up GDT and TSS, halting...", .{mp_info.processor_id});
+
+
arch.instructions.die();
+
}
+
+
const PerCpuInitData = struct {
+
gdt_buf: []StandardGdt = undefined,
+
tss_buf: []Tss = undefined,
+
idt: *Idt = undefined,
+
+
const Self = @This();
+
pub fn init(self: *Self) void {
+
// 1. Allocate an IDT
+
const idt_addr = common.init_data.bootmem.allocMem(@sizeOf(Idt)) catch |err| {
+
std.log.err("init PerCpuInitData: IDT alloc failed: {}", .{err});
+
@panic("IDT allocation failed");
+
};
+
self.idt = @ptrFromInt(idt_addr);
+
+
// 2. Allocate space for GDT and TSS data
+
const cpu_count = limine_requests.mp.response.?.cpu_count;
+
const gdt_size = @sizeOf(StandardGdt);
+
const tss_size = @sizeOf(Tss);
+
+
const total_required_size = gdt_size * cpu_count + tss_size * cpu_count;
+
const buf: [*]u8 = @ptrFromInt(common.init_data.bootmem.allocMem(total_required_size) catch |err| {
+
std.log.err("init PerCpuInitData: GDT/TSS alloc failed: {}", .{err});
+
@panic("GDT/TSS allocation failed");
+
});
+
+
// 3. Transmute and fill out the structure
+
const gdt_buf: [*]StandardGdt = @ptrCast(@alignCast(buf[0 .. gdt_size * cpu_count]));
+
const tss_buf: [*]Tss = @ptrCast(@alignCast(buf[gdt_size * cpu_count ..][0 .. tss_size * cpu_count]));
+
self.gdt_buf = gdt_buf[0..cpu_count];
+
self.tss_buf = tss_buf[0..cpu_count];
+
}
+
};
+5
components/ukernel/arch/amd64/instructions.zig
···
+
pub inline fn die() noreturn {
+
while (true) {
+
asm volatile ("hlt");
+
}
+
}
+52
components/ukernel/arch/amd64/linker.ld
···
+
OUTPUT_FORMAT(elf64-x86-64)
+
ENTRY(_start)
+
+
PHDRS {
+
limine_reqs PT_LOAD;
+
text PT_LOAD;
+
rodata PT_LOAD;
+
data PT_LOAD;
+
dynamic PT_DYNAMIC;
+
}
+
+
SECTIONS {
+
. = 0xffffffff80000000;
+
+
.limine_reqs : {
+
KEEP(*(.limine_reqs_start))
+
KEEP(*(.limine_reqs))
+
KEEP(*(.limine_reqs_end))
+
} : limine_reqs
+
+
. = ALIGN(CONSTANT(MAXPAGESIZE));
+
+
.text : {
+
*(.text .text.*)
+
} :text
+
+
. = ALIGN(CONSTANT(MAXPAGESIZE));
+
+
.rodata : {
+
*(.rodata .rodata.*)
+
} :rodata
+
+
. = ALIGN(CONSTANT(MAXPAGESIZE));
+
+
.data : {
+
*(.data .data.*)
+
} :data
+
+
.dynamic : {
+
*(.dynamic)
+
} :data :dynamic
+
+
.bss : {
+
*(.bss .bss.*)
+
*(COMMON)
+
} :data
+
+
/DISCARD/ : {
+
*(.eh_frame*)
+
*(.note .note.*)
+
}
+
}
+279
components/ukernel/arch/amd64/mm/paging.zig
···
+
const common = @import("common");
+
const arch = @import("../root.zig");
+
const std = @import("std");
+
const physToVirt = common.mm.physToHHDM;
+
const Perms = common.mm.paging.Perms;
+
+
pub const page_sizes = [_]usize{
+
0x1000, // 4K
+
0x200000, // 2M
+
0x40000000, // 1G
+
0x8000000000, // 512G
+
0x1000000000000, // 256T
+
};
+
+
pub const PageTable = extern struct {
+
entries: [512]Entry,
+
+
pub const Entry = packed struct {
+
present: bool,
+
writable: bool,
+
user_accessible: bool,
+
write_through: bool,
+
disable_cache: bool,
+
accessed: bool,
+
dirty: bool,
+
huge: bool,
+
global: bool,
+
idk: u3,
+
phys_addr: u40,
+
idk2: u11,
+
nx: bool,
+
+
const Self = @This();
+
+
pub fn getAddr(self: *const Self) u64 {
+
return @as(u64, self.phys_addr) << 12;
+
}
+
+
pub fn setAddr(self: *Self, phys_addr: u64) void {
+
const addr = phys_addr >> 12;
+
self.phys_addr = @truncate(addr);
+
}
+
};
+
};
+
+
fn extract_index_from_vaddr(vaddr: u64, level: u6) u9 {
+
const shamt = 12 + level * 9;
+
return @truncate(vaddr >> shamt);
+
}
+
+
pub const TypedPTE = union(common.mm.paging.PTEType) {
+
Mapping: MappingHandle,
+
Table: TableHandle,
+
Empty,
+
+
const Self = @This();
+
+
pub fn decode(pte: *PageTable.Entry, level: u3) Self {
+
if (!pte.present) {
+
return .Empty;
+
}
+
if (!pte.huge and level != 0) {
+
return .{ .Table = decode_table(pte, level) };
+
}
+
return .{ .Mapping = decode_mapping(pte, level) };
+
}
+
+
pub fn decode_table(pte: *PageTable.Entry, level: u3) TableHandle {
+
return .{
+
.phys_addr = pte.getAddr(),
+
.level = level,
+
.underlying = pte,
+
.perms = .{
+
.writable = pte.writable,
+
.executable = !pte.nx,
+
.userspace_accessible = pte.user_accessible,
+
},
+
};
+
}
+
+
pub fn decode_mapping(pte: *PageTable.Entry, level: u3) MappingHandle {
+
return .{
+
.phys_addr = pte.getAddr(),
+
.level = level,
+
// TODO: memory types
+
.memory_type = null,
+
.underlying = pte,
+
.perms = .{
+
.writable = pte.writable,
+
.executable = !pte.nx,
+
.userspace_accessible = pte.user_accessible,
+
},
+
};
+
}
+
};
+
+
pub const MappingHandle = struct {
+
phys_addr: usize,
+
level: u3,
+
memory_type: ?MemoryType,
+
perms: Perms,
+
underlying: *PageTable.Entry,
+
};
+
+
pub const TableHandle = struct {
+
phys_addr: usize,
+
level: u3,
+
perms: Perms,
+
underlying: ?*PageTable.Entry,
+
+
const Self = @This();
+
+
// Get the child entries of this page table
+
pub fn get_children(self: *const Self) []PageTable.Entry {
+
const page_table = physToVirt(*PageTable, self.phys_addr);
+
return page_table.entries[0..];
+
}
+
+
// Get children from the position holding the table and on
+
pub fn skip_to(self: *const Self, vaddr: usize) []PageTable.Entry {
+
return self.get_children()[extract_index_from_vaddr(vaddr, self.level - 1)..];
+
}
+
+
// Decode child table given an entry
+
pub fn decode_child(self: *const Self, pte: *PageTable.Entry) TypedPTE {
+
return TypedPTE.decode(pte, self.level - 1);
+
}
+
+
pub fn addPerms(self: *const Self, perms: Perms) void {
+
if (perms.executable) {
+
self.underlying.?.nx = false;
+
}
+
if (perms.writable) {
+
self.underlying.?.writable = true;
+
}
+
if (perms.userspace_accessible) {
+
self.underlying.?.user_accessible = true;
+
}
+
}
+
+
pub fn child_domain(self: *const Self, vaddr: usize) UntypedSlice {
+
return domain(vaddr, self.level - 1);
+
}
+
+
pub fn make_child_table(self: *const Self, pte: *PageTable.Entry, perms: Perms) !TableHandle {
+
const pmem = try make_page_table();
+
+
const result: TableHandle = .{
+
.phys_addr = pmem,
+
.level = self.level - 1,
+
.perms = perms,
+
.underlying = pte,
+
};
+
pte.* = encode_table(result);
+
+
return result;
+
}
+
+
pub fn make_child_mapping(
+
self: *const Self,
+
pte: *PageTable.Entry,
+
paddr: ?usize,
+
perms: Perms,
+
memory_type: MemoryType,
+
) !MappingHandle {
+
const page_size = page_sizes[self.level - 1];
+
const pmem = paddr orelse try common.init_data.bootmem.allocPhys(page_size);
+
+
const result: MappingHandle = .{
+
.level = self.level - 1,
+
.memory_type = memory_type,
+
.perms = perms,
+
.underlying = pte,
+
.phys_addr = pmem,
+
};
+
+
pte.* = encode_mapping(result);
+
+
return result;
+
}
+
};
+
+
pub fn root_table(vaddr: usize) TableHandle {
+
_ = vaddr;
+
const cr3_val = arch.registers.ControlRegisters.Cr3.read() & 0xFFFF_FFFF_FFFF_F000;
+
return .{
+
.phys_addr = cr3_val,
+
// TODO: detect and support 5 level paging!
+
.level = 4,
+
.perms = .{
+
.executable = true,
+
.writable = true,
+
},
+
.underlying = null,
+
};
+
}
+
+
fn encode_table(pte_handle: TableHandle) PageTable.Entry {
+
var pte = std.mem.zeroes(PageTable.Entry);
+
+
pte.setAddr(pte_handle.phys_addr);
+
pte.writable = pte_handle.perms.writable;
+
pte.user_accessible = pte_handle.perms.userspace_accessible;
+
pte.nx = !pte_handle.perms.executable;
+
pte.present = true;
+
pte.huge = false;
+
+
return pte;
+
}
+
+
fn encode_mapping(pte_handle: MappingHandle) PageTable.Entry {
+
var pte = std.mem.zeroes(PageTable.Entry);
+
+
pte.setAddr(pte_handle.phys_addr);
+
pte.present = true;
+
+
if (pte_handle.level != 0) {
+
pte.huge = true;
+
}
+
+
pte.writable = pte_handle.perms.writable;
+
pte.user_accessible = pte_handle.perms.userspace_accessible;
+
pte.nx = !pte_handle.perms.executable;
+
+
encode_memory_type(&pte, pte_handle);
+
+
return pte;
+
}
+
+
fn encode_memory_type(pte: *PageTable.Entry, pte_handle: MappingHandle) void {
+
const mt = pte_handle.memory_type orelse @panic("Unknown memory type");
+
+
// TODO: Page Attribute Table
+
switch (mt) {
+
.MemoryWritethrough => pte.write_through = true,
+
.DeviceUncacheable => pte.disable_cache = true,
+
.MemoryWriteBack => {},
+
else => @panic("Cannot set memory type"),
+
}
+
}
+
+
/// Returns physical address
+
fn make_page_table() !usize {
+
const pt_phys = try common.init_data.bootmem.allocPhys(std.heap.pageSize());
+
const pt = physToVirt([*]u8, pt_phys);
+
@memset(pt[0..std.heap.pageSize()], 0x00);
+
return pt_phys;
+
}
+
+
pub fn invalidate(vaddr: u64) void {
+
asm volatile (
+
\\ invlpg (%[vaddr])
+
:
+
: [vaddr] "r" (vaddr),
+
: .{ .memory = true });
+
}
+
+
const UntypedSlice = struct {
+
len: usize,
+
ptr: usize,
+
};
+
+
pub fn domain(vaddr: usize, level: u3) UntypedSlice {
+
return .{
+
.len = page_sizes[level],
+
.ptr = vaddr & ~(page_sizes[level] - 1),
+
};
+
}
+
+
pub const MemoryType = enum {
+
DeviceUncacheable,
+
DeviceWriteCombining,
+
MemoryWritethrough,
+
MemoryWriteBack,
+
};
+
+
pub fn can_map_at(level: u3) bool {
+
return level < 2;
+
}
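The index arithmetic above (`extract_index_from_vaddr` plus `domain`) can be checked in isolation. A self-contained sketch, hosted rather than freestanding; all names here are local to the example, only the constants mirror the kernel code:

```zig
const std = @import("std");

const page_sizes = [_]u64{ 0x1000, 0x200000, 0x40000000 };

// Each paging level consumes 9 bits of the address, starting at bit 12
fn indexAt(vaddr: u64, level: u6) u9 {
    return @truncate(vaddr >> (12 + level * 9));
}

pub fn main() void {
    const vaddr: u64 = 0xffff_8000_0020_3000;
    // Bits 12-20 select the level-0 (4K) entry
    std.debug.assert(indexAt(vaddr, 0) == 3);
    // Bits 21-29 select the level-1 (2M) entry
    std.debug.assert(indexAt(vaddr, 1) == 1);
    // The 2M-aligned region ("domain") containing vaddr
    std.debug.assert((vaddr & ~(page_sizes[1] - 1)) == 0xffff_8000_0020_0000);
}
```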
+1
components/ukernel/arch/amd64/mm/root.zig
···
pub const paging = @import("paging.zig");
+95
components/ukernel/arch/amd64/registers.zig
···
pub fn ControlRegister(comptime T: type, comptime reg: []const u8) type {
    return struct {
        pub fn read() T {
            return asm volatile ("mov %%" ++ reg ++ ", %[output]"
                : [output] "=r" (-> T),
            );
        }

        pub fn write(value: T) void {
            asm volatile ("mov %[input], %%" ++ reg
                :
                : [input] "r" (value),
            );
        }
    };
}

pub fn GeneralPurpose(comptime T: type, comptime reg: []const u8) type {
    return struct {
        pub fn read() T {
            return asm volatile ("mov %%" ++ reg ++ ", %[output]"
                : [output] "=r" (-> T),
            );
        }

        pub fn write(value: T) void {
            asm volatile ("mov %[input], %%" ++ reg
                :
                : [input] "r" (value),
            );
        }
    };
}

pub fn MSR(comptime T: type, comptime num: u32) type {
    return struct {
        pub fn read() T {
            // TODO: switch on bit size to allow custom structs
            switch (T) {
                u32 => return asm volatile ("rdmsr"
                    : [_] "={eax}" (-> u32),
                    : [_] "{ecx}" (num),
                ),
                u64 => {
                    var low_val: u32 = undefined;
                    var high_val: u32 = undefined;
                    asm volatile ("rdmsr"
                        : [_] "={eax}" (low_val),
                          [_] "={edx}" (high_val),
                        : [_] "{ecx}" (num),
                    );
                    return (@as(u64, high_val) << 32) | @as(u64, low_val);
                },
                else => @compileError("Unimplemented for type"),
            }
        }

        pub fn write(value: T) void {
            switch (T) {
                u32 => asm volatile ("wrmsr"
                    :
                    : [_] "{eax}" (value),
                      [_] "{edx}" (@as(u32, 0)),
                      [_] "{ecx}" (num),
                ),
                u64 => {
                    const low_val: u32 = @truncate(value);
                    const high_val: u32 = @truncate(value >> 32);
                    asm volatile ("wrmsr"
                        :
                        : [_] "{eax}" (low_val),
                          [_] "{edx}" (high_val),
                          [_] "{ecx}" (num),
                    );
                },
                else => @compileError("Unimplemented for type"),
            }
        }
    };
}

pub const ControlRegisters = struct {
    pub const Cr0 = ControlRegister(u64, "cr0");
    pub const Cr2 = ControlRegister(u64, "cr2");
    pub const Cr3 = ControlRegister(u64, "cr3");
    pub const Cr4 = ControlRegister(u64, "cr4");
};

pub const Segmentation = struct {
    pub const Cs = GeneralPurpose(u16, "cs");
    pub const Ds = GeneralPurpose(u16, "ds");
    pub const Ss = GeneralPurpose(u16, "ss");
    pub const Es = GeneralPurpose(u16, "es");
    pub const Fs = GeneralPurpose(u16, "fs");
    pub const Gs = GeneralPurpose(u16, "gs");
};
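`rdmsr` and `wrmsr` above move 64-bit MSR values split across `edx:eax`; the recombination arithmetic can be sanity-checked on its own. A hosted sketch, with all names local to the example:

```zig
const std = @import("std");

// Mirrors the (high << 32) | low recombination in MSR.read
fn combine(low: u32, high: u32) u64 {
    return (@as(u64, high) << 32) | @as(u64, low);
}

// Mirrors the truncating split in MSR.write
fn split(value: u64) [2]u32 {
    return .{ @truncate(value), @truncate(value >> 32) };
}

pub fn main() void {
    const v: u64 = 0xdead_beef_c0de_cafe;
    const halves = split(v);
    std.debug.assert(halves[0] == 0xc0de_cafe); // eax half
    std.debug.assert(halves[1] == 0xdead_beef); // edx half
    std.debug.assert(combine(halves[0], halves[1]) == v);
}
```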
+24
components/ukernel/arch/amd64/root.zig
···
pub const boot = @import("boot.zig");
pub const instructions = @import("instructions.zig");
pub const mm = @import("mm/root.zig");
pub const structures = @import("structures/root.zig");
pub const registers = @import("registers.zig");
const common = @import("common");
const std = @import("std");

fn pageSize() usize {
    return 4 << 10;
}

pub const std_options: std.Options = .{
    .logFn = common.aux.logFn,
    .page_size_min = 4 << 10,
    .page_size_max = 4 << 10,
    .queryPageSize = pageSize,
};

pub const panic = std.debug.FullPanic(common.aux.panic);

comptime {
    // Entry point (_start)
    @export(&boot.bsp_init, .{ .name = "_start", .linkage = .strong });
}
+170
components/ukernel/arch/amd64/structures/Idt.zig
···
//! The entire Interrupt Descriptor Table (IDT) structure for AMD64,
//! including all the necessary ISR entries. Each of the defined
//! ISRs is meant for a specific type of exception, while the
//! array at the end of the IDT can be used for whatever is necessary.
const std = @import("std");
const arch = @import("../root.zig");
const StandardGdt = arch.structures.gdt.StandardGdt;

/// Faulty division (mostly divide by zero)
divide_error: Entry(.handler),
/// AMD64 Debug Exception, either a fault or a trap
debug_exception: Entry(.handler),
/// Non Maskable Interrupt
non_maskable_interrupt: Entry(.handler),
/// Breakpoint (int3) trap
breakpoint: Entry(.handler),
/// Overflow trap (INTO instruction)
overflow: Entry(.handler),
/// Bound Range Exceeded (BOUND instruction)
bound_range_exceeded: Entry(.handler),
/// Invalid Opcode Exception
invalid_opcode: Entry(.handler),
/// Device Not Available (FPU instructions when FPU disabled)
device_not_available: Entry(.handler),
/// Double Fault Exception
double_fault: Entry(.abort_with_err_code),
_coprocessor_segment_overrun: Entry(.handler),
/// Invalid TSS: bad segment selector
invalid_tss: Entry(.handler_with_err_code),
/// Segment Not Present
segment_not_present: Entry(.handler_with_err_code),
/// Stack Segment Fault
stack_segment_fault: Entry(.handler_with_err_code),
/// General Protection Fault
general_protection_fault: Entry(.handler_with_err_code),
/// Page Fault
page_fault: Entry(.handler_with_err_code),

_reserved1: Entry(.handler),
/// x87 Floating Point Exception
x87_floating_point: Entry(.handler),
/// Alignment Check Exception
alignment_check: Entry(.handler_with_err_code),
/// Machine Check Exception (MCE)
machine_check: Entry(.abort),
/// SIMD Floating Point Exception
simd_floating_point: Entry(.handler),
/// Virtualization Exception
virtualization: Entry(.handler),
/// Control Protection Exception
control_protection: Entry(.handler_with_err_code),
_reserved2: [10]Entry(.handler),
/// User Accessible Interrupts
interrupts: [256 - 32]Entry(.handler),

/// The kind of ISR an IDT entry can hold
pub const EntryType = union(enum) {
    abort: void,
    abort_with_err_code: void,
    handler: void,
    handler_with_err_code: void,
    handler_with_custom_err_code: type,
};

pub fn Entry(comptime entry_type: EntryType) type {
    const return_type = switch (entry_type) {
        .abort, .abort_with_err_code => noreturn,
        .handler, .handler_with_err_code, .handler_with_custom_err_code => void,
    };
    const params: []const std.builtin.Type.Fn.Param = switch (entry_type) {
        .handler, .abort => &.{
            // Interrupt stack frame
            .{ .is_generic = false, .is_noalias = false, .type = *InterruptStackFrame },
        },
        .handler_with_err_code, .abort_with_err_code => &.{
            // Interrupt stack frame
            .{ .is_generic = false, .is_noalias = false, .type = *InterruptStackFrame },
            // Error code
            .{ .is_generic = false, .is_noalias = false, .type = u64 },
        },
        .handler_with_custom_err_code => |err_code_type| &.{
            // Interrupt stack frame
            .{ .is_generic = false, .is_noalias = false, .type = *InterruptStackFrame },
            // Custom error code
            .{ .is_generic = false, .is_noalias = false, .type = err_code_type },
        },
    };
    const FunctionTypeInfo: std.builtin.Type = .{
        .@"fn" = .{
            .calling_convention = .{ .x86_64_interrupt = .{} },
            .is_generic = false,
            .is_var_args = false,
            .return_type = return_type,
            .params = params,
        },
    };

    // The actual IDT entry structure
    return extern struct {
        func_low: u16,
        gdt_selector: u16,
        options: Options,
        func_mid: u16,
        func_high: u32,
        _reserved: u32 = 0,

        const FuncType = @Type(FunctionTypeInfo);

        pub const Options = packed struct {
            /// Interrupt Stack Table Index
            ist_index: u3,
            _reserved: u5 = 0,
            /// Low bit of the gate type: an interrupt gate (0) masks
            /// interrupts on entry, a trap gate (1) leaves them enabled
            gate_type: GateType,
            must_be_one: u3 = 0b111,
            must_be_zero: u1 = 0,
            /// Descriptor Privilege Level
            dpl: u2,
            present: bool,

            pub const GateType = enum(u1) {
                interrupt = 0,
                trap = 1,
            };
        };

        const Self = @This();

        pub fn installHandler(self: *Self, func: *const FuncType) void {
            const func_ptr = @intFromPtr(func);
            self.* = .{
                // Set the function pointer
                .func_low = @truncate(func_ptr & 0xFFFF),
                .func_mid = @truncate((func_ptr >> 16) & 0xFFFF),
                .func_high = @truncate((func_ptr >> 32) & 0xFFFF_FFFF),
                .gdt_selector = StandardGdt.selectors.kernel_code,
                .options = .{
                    // No Interrupt Stack Table yet
                    .ist_index = 0,
                    // Mask interrupts while running the ISR handler
                    .gate_type = .interrupt,
                    // Ring 3 minimum privilege level
                    .dpl = 3,
                    // Mark as present
                    .present = true,
                },
            };
        }
    };
}

/// IDT Register
pub const Idtr = packed struct {
    limit: u16,
    addr: u64,

    /// Load the IDT Register
    pub fn load(self: *const Idtr) void {
        asm volatile ("lidt (%[idtr_addr])"
            :
            : [idtr_addr] "r" (self),
        );
    }
};

/// Interrupt Stack Frame
/// TODO: maybe move this somewhere else
pub const InterruptStackFrame = extern struct {
    instruction_pointer: u64,
    code_segment: u16,
    _reserved1: [6]u8,
    cpu_flags: u64,
    stack_pointer: u64,
    stack_segment: u16,
    _reserved2: [6]u8,
};
+167
components/ukernel/arch/amd64/structures/gdt.zig
···
//! The Global Descriptor Table (GDT) structure for AMD64
const std = @import("std");
const arch = @import("../root.zig");

pub const Descriptor = packed struct {
    limit_low: u16 = 0,
    base_low: u16 = 0,
    base_mid: u8 = 0,
    access: Access,
    limit_high: u4 = 0,
    flags: Flags = .{},
    base_high: u8 = 0,

    const Self = @This();

    pub const Access = packed struct {
        // Accessed
        accessed: bool = true,
        // Readable/Writable
        rw: bool = false,
        // Direction bit or Conforming bit
        dc: bool = false,
        // Executable
        executable: bool,
        // Descriptor Type bit
        descriptor_type: DescriptorType = .code_or_data,
        // Descriptor Privilege Level
        dpl: u2,
        // Present bit
        p: bool = true,

        pub const DescriptorType = enum(u1) {
            tss = 0,
            code_or_data = 1,
        };
    };

    pub const Flags = packed struct {
        // Reserved
        _reserved: u1 = 0,
        // Long Mode code flag
        long_mode: bool = true,
        // Size flag (16-bit vs 32-bit)
        db: DB = .protected_16,
        // Granularity flag
        granularity: Granularity = .byte,

        pub const Granularity = enum(u1) {
            byte = 0,
            page = 1,
        };

        pub const DB = enum(u1) {
            protected_16 = 0,
            protected_32 = 1,
        };
    };

    pub const null_desc = std.mem.zeroes(Descriptor);
    pub const kernel_code: Self = .{ .access = .{ .dpl = 0, .executable = true } };
    pub const kernel_data: Self = .{ .access = .{ .dpl = 0, .executable = false, .rw = true } };
    pub const user_code: Self = .{ .access = .{ .dpl = 3, .executable = true } };
    pub const user_data: Self = .{ .access = .{ .dpl = 3, .executable = false, .rw = true } };
};

pub const StandardGdt = extern struct {
    null_desc: Descriptor = .null_desc,
    kernel_code: Descriptor = .kernel_code,
    kernel_data: Descriptor = .kernel_data,
    tss_desc: TssDescriptor align(@alignOf(Descriptor)) = .{},
    user_data: Descriptor = .user_data,
    user_code: Descriptor = .user_code,

    pub const selectors = struct {
        pub const null_desc = @offsetOf(StandardGdt, "null_desc");
        pub const kernel_code = @offsetOf(StandardGdt, "kernel_code");
        pub const kernel_data = @offsetOf(StandardGdt, "kernel_data");
        pub const tss_desc = @offsetOf(StandardGdt, "tss_desc");
        pub const user_data = @offsetOf(StandardGdt, "user_data") | 0b11;
        pub const user_code = @offsetOf(StandardGdt, "user_code") | 0b11;
    };

    const Self = @This();

    pub fn load(self: *Self) void {
        // 1. Load the GDTR
        const gdtr: Gdtr = .{
            .limit = @truncate(@sizeOf(StandardGdt) - 1),
            .base = @intFromPtr(self),
        };
        gdtr.load();

        // 2. Set the kernel data segments
        asm volatile (
            \\ mov %[sel], %%ds
            \\ mov %[sel], %%es
            \\ mov %[sel], %%fs
            \\ mov %[sel], %%gs
            \\ mov %[sel], %%ss
            :
            : [sel] "rm" (@as(u16, selectors.kernel_data)),
        );

        // 3. Reload the kernel code segment (far return)
        asm volatile (
            \\ push %[sel]
            \\ lea 1f(%%rip), %%rax
            \\ push %%rax
            \\ .byte 0x48, 0xcb // retfq
            \\ 1:
            :
            : [sel] "i" (@as(u16, selectors.kernel_code)),
            : .{ .rax = true });

        // 4. Load the TSS descriptor
        asm volatile (
            \\ ltr %[sel]
            :
            : [sel] "r" (@as(u16, selectors.tss_desc)),
        );
    }
};

pub const Gdtr = packed struct {
    limit: u16,
    base: u64,

    pub fn load(self: *const Gdtr) void {
        asm volatile ("lgdt %[gdtr_ptr]"
            :
            : [gdtr_ptr] "*p" (self),
        );
    }
};

const TssDescriptor = extern struct {
    const Low = packed struct {
        limit_low: u16 = 0,
        base_low: u16 = 0,
        base_mid: u8 = 0,
        seg_type: u4 = 0b1001,
        _reserved0: u1 = 0b0,
        dpl: u2 = 0,
        p: bool = true,
        limit_high: u4 = 0,
        unused: u1 = 0,
        _reserved1: u2 = 0b00,
        granularity: u1 = 0,
        base_high: u8 = 0,
    };
    descriptor: Low = .{},
    base_top: u32 = 0,
    _reserved: u32 = 0,

    const Self = @This();

    pub fn set_tss_addr(self: *Self, tss: *const arch.structures.tss.Tss) void {
        const tss_ptr: usize = @intFromPtr(tss);
        // Set the base
        self.descriptor.base_low = @truncate(tss_ptr);
        self.descriptor.base_mid = @truncate(tss_ptr >> 16);
        self.descriptor.base_high = @truncate(tss_ptr >> 24);
        self.base_top = @truncate(tss_ptr >> 32);
        // Set the limit (byte granularity, so size - 1)
        self.descriptor.limit_low = @sizeOf(arch.structures.tss.Tss) - 1;
    }
};
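A segment selector is the descriptor's byte offset in the GDT with the requested privilege level (RPL) in its low two bits, which is what the `| 0b11` on the user selectors above encodes. A standalone sketch of that arithmetic; all names are local to the example:

```zig
const std = @import("std");

// A selector is (descriptor index * 8) with the RPL in bits 0-1
fn selector(index: u16, rpl: u2) u16 {
    return (index * 8) | rpl;
}

pub fn main() void {
    // Mirrors the StandardGdt layout: kernel_code is descriptor 1,
    // kernel_data descriptor 2 (each normal descriptor is 8 bytes)
    std.debug.assert(selector(1, 0) == 0x08);
    std.debug.assert(selector(2, 0) == 0x10);
    // The 16-byte TSS descriptor occupies slots 3-4, so user_data sits
    // at offset 40; with RPL 3 that gives 0x2b
    std.debug.assert(selector(5, 3) == 0x2b);
}
```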
+3
components/ukernel/arch/amd64/structures/root.zig
···
pub const gdt = @import("gdt.zig");
pub const tss = @import("tss.zig");
pub const Idt = @import("Idt.zig");
+17
components/ukernel/arch/amd64/structures/tss.zig
···
pub const Tss = extern struct {
    _reserved1: u32 = 0,
    rsp0: u64 align(4) = 0,
    rsp1: u64 align(4) = 0,
    rsp2: u64 align(4) = 0,
    _reserved2: u64 align(4) = 0,
    ist1: u64 align(4) = 0,
    ist2: u64 align(4) = 0,
    ist3: u64 align(4) = 0,
    ist4: u64 align(4) = 0,
    ist5: u64 align(4) = 0,
    ist6: u64 align(4) = 0,
    ist7: u64 align(4) = 0,
    _reserved3: u64 align(4) = 0,
    _reserved4: u16 = 0,
    io_map_base_address: u16 = @sizeOf(Tss),
};
+40
components/ukernel/arch/riscv64/linker.ld
···
OUTPUT_FORMAT(elf64-littleriscv)
ENTRY(_start)

PHDRS {
    text PT_LOAD;
    rodata PT_LOAD;
    data PT_LOAD;
}

SECTIONS {
    . = 0xffffffff80000000;

    .text : {
        *(.text .text.*)
    } :text

    . = ALIGN(CONSTANT(MAXPAGESIZE));

    .rodata : {
        *(.rodata .rodata.*)
    } :rodata

    . = ALIGN(CONSTANT(MAXPAGESIZE));

    .data : {
        *(.data .data.*)
        *(.sdata .sdata.*)
    } :data

    .bss : {
        *(.sbss .sbss.*)
        *(.bss .bss.*)
        *(COMMON)
    } :data

    /DISCARD/ : {
        *(.eh_frame*)
        *(.note .note.*)
    }
}
+97
components/ukernel/build.zig
···
const std = @import("std");
const build_helpers = @import("build_helpers");

pub fn build(b: *std.Build) void {
    const arch = b.option(build_helpers.Architecture, "arch", "The target ukernel architecture") orelse .amd64;

    // Set CPU features based on the architecture
    var target_query: std.Target.Query = .{
        .cpu_arch = arch.get(),
        .os_tag = .freestanding,
        .abi = .none,
    };
    const arch_root_path, const linker_script_path, const code_model: std.builtin.CodeModel = blk: {
        switch (arch) {
            .amd64 => {
                const Feature = std.Target.x86.Feature;

                target_query.cpu_features_add.addFeature(@intFromEnum(Feature.soft_float));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.mmx));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.sse));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.sse2));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.avx));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.avx2));

                break :blk .{ "arch/amd64/root.zig", "arch/amd64/linker.ld", .kernel };
            },
            .aarch64 => {
                const Feature = std.Target.aarch64.Feature;

                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.fp_armv8));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.crypto));
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.neon));

                break :blk .{ "arch/aarch64/root.zig", "arch/aarch64/linker.ld", .default };
            },
            .riscv64 => {
                const Feature = std.Target.riscv.Feature;
                target_query.cpu_features_sub.addFeature(@intFromEnum(Feature.d));

                break :blk .{ "arch/riscv64/root.zig", "arch/riscv64/linker.ld", .default };
            },
        }
    };

    const target = b.resolveTargetQuery(target_query);
    const optimize = b.standardOptimizeOption(.{ .preferred_optimize_mode = .ReleaseSafe });

    const arch_module = b.createModule(.{
        .root_source_file = b.path(arch_root_path),
        .target = target,
        .optimize = optimize,
        .code_model = code_model,
    });

    switch (arch) {
        .amd64 => {
            arch_module.addAssemblyFile(b.path("arch/amd64/asm/traps.S"));
        },
        else => {},
    }

    const limine_dep = b.dependency("limine", .{
        .api_revision = 3,
    });
    const spinlock_dep = b.dependency("spinlock", .{});
    const console_dep = b.dependency("console", .{});

    const limine_mod = limine_dep.module("limine");
    const console_mod = console_dep.module("console");
    const spinlock_mod = spinlock_dep.module("spinlock");

    const common_mod = b.createModule(.{
        .root_source_file = b.path("common/root.zig"),
    });

    arch_module.addImport("limine", limine_mod);
    arch_module.addImport("console", console_mod);
    arch_module.addImport("common", common_mod);

    console_mod.addImport("limine", limine_mod);

    common_mod.addImport("arch", arch_module);
    common_mod.addImport("console", console_mod);
    common_mod.addImport("spinlock", spinlock_mod);

    const kernel = b.addExecutable(.{
        .name = "ukernel",
        .root_module = arch_module,
        // TODO: remove when the x86 backend handles removed CPU features better
        .use_llvm = true,
    });

    kernel.pie = false;
    kernel.want_lto = true;
    kernel.setLinkerScript(b.path(linker_script_path));
    b.installArtifact(kernel);
}
+18
components/ukernel/build.zig.zon
···
.{
    .name = .ukernel,
    .fingerprint = 0xcf2c1bfa85f3f299,
    .version = "0.0.1",
    .minimum_zig_version = "0.15.1",
    .dependencies = .{
        .limine = .{ .path = "deps/limine-zig" },
        .spinlock = .{ .path = "deps/spinlock" },
        .console = .{ .path = "deps/console" },
        .build_helpers = .{ .path = "../build_helpers" },
    },
    .paths = .{
        "build.zig",
        "build.zig.zon",
        "deps",
        "arch",
    },
}
+73
components/ukernel/common/aux.zig
···
const console = @import("console");
const common = @import("root.zig");
const mm = common.mm;
const std = @import("std");
const arch = @import("arch");
const spinlock = @import("spinlock");

// Types
pub const HardwareDescription = union(enum) {
    /// Physical address of ACPI RSDP
    acpi_rsdp: usize,
    /// Virtual pointer to DTB
    dtb: *anyopaque,
    none,
};

pub const InitState = struct {
    bootmem: mm.bootmem.BootPmm = .{},
    console: ?console.Console = null,
    framebuffer: ?console.Framebuffer = null,
    hardware_description: HardwareDescription = .none,
    root_task: []align(4096) u8 = undefined,
    hhdm_slide: usize = 0,
};

var stdout_lock: spinlock.Spinlock = .{};

pub fn logFn(
    comptime message_level: std.log.Level,
    comptime scope: @TypeOf(.enum_literal),
    comptime format: []const u8,
    args: anytype,
) void {
    if (common.init_data.console == null) return;

    // Use the same naming as the default logger
    const level, const color = switch (message_level) {
        .debug => .{ "D", 0x3bcf1d },
        .err => .{ "E", 0xff0000 },
        .info => .{ "I", 0x00bbbb },
        .warn => .{ "W", 0xfee409 },
    };
    // Use the same format as the default logger, too
    const scope_text = switch (scope) {
        .default => "",
        else => "<" ++ @tagName(scope) ++ ">",
    };
    const prefix = std.fmt.comptimePrint("{s}{s}: ", .{ level, scope_text });

    {
        stdout_lock.lock();
        defer stdout_lock.unlock();

        const cons = &common.init_data.console.?;

        cons.setColor(color, 0);
        cons.writer().print(prefix ++ format ++ "\n", args) catch return;
    }
}

pub fn panic(msg: []const u8, first_trace_addr: ?usize) noreturn {
    _ = first_trace_addr;
    const log = std.log.scoped(.panic);
    common.init_data.console.?.setColor(0xff0000, 0);
    log.err("PANIC: {s}", .{msg});
    var it = std.debug.StackIterator.init(@returnAddress(), @frameAddress());
    defer it.deinit();
    while (it.next()) |addr| {
        if (addr == 0) break;
        log.err("Addr: 0x{x:0>16}", .{addr});
    }
    arch.instructions.die();
}
+146
components/ukernel/common/mm/bootmem.zig
···
//! Simple multi-bump allocator, because the old buddy system wasn't
//! suitable to pass memory down to the root task, and we never
//! have to free memory
const std = @import("std");
const common = @import("../root.zig");
const arch = @import("arch");
const log = std.log.scoped(.bootmem);

/// Meant for pageframe allocation; bump-allocates first-fit starting
/// from the end. Ideally we do as little memory allocation as possible
/// in the microkernel anyways.
pub const BootPmm = struct {
    // Store this in a chunk of RAM which you remove from the region
    regions: []Region = undefined,
    //top_region: usize = undefined,

    const Self = @This();
    pub const Region = struct {
        base: usize,
        length: usize,
        type: RegionType,
    };
    pub const RegionType = enum(usize) {
        usable = 0,
        acpi_reclaimable = 1,
        bootloader_reclaimable = 2,
        executable_and_modules = 3,
        framebuffer = 4,
        // Just forget about reserved regions for now
        reserved = 5,
    };

    /// Calculate the size needed to hold the []Region slice
    pub fn calculateMetadataSize(entries: usize) usize {
        return entries * @sizeOf(Region);
    }

    /// `regions` must be non-empty.
    /// Only call this function once.
    pub fn initialize(self: *Self, regions: []Region) void {
        self.regions = regions;
        //self.top_region = regions.len - 1;
    }

    /// Allocates physical memory, aligned to page size
    pub fn allocPhys(self: *Self, size: usize) !usize {
        if (self.regions.len == 0) {
            @branchHint(.cold);
            return error.NoMemory;
        }
        const true_alloc_size = std.mem.alignForward(usize, size, std.heap.pageSize());

        var i: usize = self.regions.len;
        while (i > 0) : (i -= 1) {
            const region = &self.regions[i - 1];
            if (region.type != .usable) continue;
            if (true_alloc_size > region.length) continue;
            region.length -= true_alloc_size;
            return region.base + region.length;
        }
        return error.OutOfMemory;
    }

    pub fn allocMem(self: *Self, size: usize) !usize {
        return try self.allocPhys(size) + common.init_data.hhdm_slide;
    }

    pub fn debugInfo(self: *Self) void {
        var total: usize = 0;
        var usable: usize = 0;

        for (self.regions) |region| {
            total += region.length;
            if (region.type == .usable) usable += region.length;
        }

        const total_gib = total / (1 << 30);
        total -= total_gib * (1 << 30);
        const total_mib = total / (1 << 20);
        total -= total_mib * (1 << 20);
        const total_kib = total / (1 << 10);
        log.debug("Total Memory: {} GiB + {} MiB + {} KiB", .{ total_gib, total_mib, total_kib });

        const usable_gib = usable / (1 << 30);
        usable -= usable_gib * (1 << 30);
        const usable_mib = usable / (1 << 20);
        usable -= usable_mib * (1 << 20);
        const usable_kib = usable / (1 << 10);
        log.debug("Usable: {} GiB + {} MiB + {} KiB", .{ usable_gib, usable_mib, usable_kib });
    }
};

pub fn init() void {
    const memmap = arch.boot.limine_requests.memmap.response.?.getEntries();

    const bootmem_structure_size, const region_count = blk: {
        var region_count: usize = 0;
        for (memmap) |region| {
            switch (region.type) {
                .usable, .acpi_reclaimable, .bootloader_reclaimable, .executable_and_modules, .framebuffer => region_count += 1,
                else => {},
            }
        }

        break :blk .{ BootPmm.calculateMetadataSize(region_count), region_count };
    };
    const bootmem_pages = std.mem.alignForward(usize, bootmem_structure_size, std.heap.pageSize());

    // Given the bootmem structure size, find a page to hold it
    const bootmem_struct: []BootPmm.Region = blk: {
        var i: usize = memmap.len;
        while (i > 0) : (i -= 1) {
            const region = memmap[i - 1];
            if (bootmem_pages > region.length) continue;
            switch (region.type) {
                .usable => {},
                else => continue,
            }
            // Carve the metadata out of the tail of this usable region
            region.length -= bootmem_pages;
            const bootmem_struct_ptr = common.mm.physToHHDM([*]BootPmm.Region, region.base + region.length);
            break :blk bootmem_struct_ptr[0..region_count];
        }
        @panic("Couldn't allocate bootmem structure!");
    };
    // Now, fill the bootmem structure out from the limine memmap
    var i: usize = 0;
    for (memmap) |region| {
        switch (region.type) {
            .usable => bootmem_struct[i].type = .usable,
            .acpi_reclaimable => bootmem_struct[i].type = .acpi_reclaimable,
            .bootloader_reclaimable => bootmem_struct[i].type = .bootloader_reclaimable,
            .executable_and_modules => bootmem_struct[i].type = .executable_and_modules,
            .framebuffer => bootmem_struct[i].type = .framebuffer,
            else => continue,
        }
        bootmem_struct[i].base = region.base;
        bootmem_struct[i].length = region.length;
        i += 1;
    }

    // Finally, initialize the global bootmem
    common.init_data.bootmem.initialize(bootmem_struct);
    common.init_data.bootmem.debugInfo();
}
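The allocPhys strategy above (scan regions from the end, shrink the first usable one that fits, hand back its new tail) can be modeled hosted; `Region` here is a simplified stand-in for the kernel's type and 4K pages are assumed:

```zig
const std = @import("std");

const Region = struct { base: usize, length: usize, usable: bool };

fn allocPhys(regions: []Region, size: usize) !usize {
    // Page-align the request (4K assumed for this sketch)
    const aligned = std.mem.alignForward(usize, size, 0x1000);
    var i: usize = regions.len;
    while (i > 0) : (i -= 1) {
        const region = &regions[i - 1];
        if (!region.usable or aligned > region.length) continue;
        // Shrink the region and return its new tail
        region.length -= aligned;
        return region.base + region.length;
    }
    return error.OutOfMemory;
}

pub fn main() !void {
    var regions = [_]Region{
        .{ .base = 0x100000, .length = 0x10000, .usable = true },
        .{ .base = 0x200000, .length = 0x4000, .usable = true },
    };
    // Served from the tail of the last usable region that fits
    const a = try allocPhys(&regions, 0x2000);
    std.debug.assert(a == 0x202000);
    // 0x3000 no longer fits the last region, so it falls back one
    const b = try allocPhys(&regions, 0x3000);
    std.debug.assert(b == 0x10d000);
}
```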
+117
components/ukernel/common/mm/paging.zig
···
+
const arch = @import("arch");
+
const std = @import("std");
+
const TableHandle = arch.mm.paging.TableHandle;
+
const MemoryType = arch.mm.paging.MemoryType;
+
+
pub const Perms = struct {
+
writable: bool,
+
executable: bool,
+
userspace_accessible: bool = false,
+
+
const Self = @This();
+
+
/// Verify that the current permissions are a superset of the provided ones
+
pub fn allows(self: Self, other: Self) bool {
+
if (!self.writable and other.writable) {
+
return false;
+
}
+
if (!self.executable and other.executable) {
+
return false;
+
}
+
if (!self.userspace_accessible and other.userspace_accessible) {
+
return false;
+
}
+
return true;
+
}
+
+
/// OR two permissions
+
pub fn addPerms(self: Self, other: Self) Self {
+
return .{
+
.writable = self.writable or other.writable,
+
.executable = self.executable or other.executable,
+
.userspace = self.userspace_accessible or other.userspace_accessible,
+
};
+
}
+
};
+
+
pub const PTEType = enum { Mapping, Table, Empty };
+
+
pub fn mapPhys(args: struct {
+
vaddr: usize,
+
paddr: usize,
+
size: usize,
+
perms: Perms,
+
memory_type: MemoryType,
+
}) !void {
+
const root = arch.mm.paging.root_table(args.vaddr);
+
var vaddr = args.vaddr;
+
var paddr = args.paddr;
+
var size = args.size;
+
try mapPageImpl(&vaddr, &paddr, &size, root, args.perms, args.memory_type);
+
}
+
+
fn mapPageImpl(
+
vaddr: *usize,
+
paddr: ?*usize,
+
size: *usize,
+
table: TableHandle,
+
perms: Perms,
+
memory_type: MemoryType,
+
) !void {
+
// 1. Get slice of every child from the target forwards
+
const children = table.skip_to(vaddr.*);
+
+
// 2. For each PTE, decode to the type (Mapping, Table, Empty)
+
// If there's already a mapping, we're fucked
+
// If it's a table, keep going forward till we reach Mapping or Empty,
+
// while of course ensuring permissions
+
// If it's empty, check if we reached our target level. If we didn't,
+
// then make a new child table and keep going. If it's not empty, then
+
// make the child mapping and reduce the amount of size we're targetting
+
for (children) |*child| {
+
switch (table.decode_child(child)) {
+
.Mapping => return error.AlreadyPresent,
+
.Table => |*tbl| {
+
try mapPageImpl(vaddr, paddr, size, tbl.*, perms, memory_type);
+
if (!tbl.perms.allows(perms)) {
+
tbl.addPerms(perms);
+
arch.mm.paging.invalidate(vaddr.*);
+
}
+
},
+
.Empty => {
+
const domain = table.child_domain(vaddr.*);
+
if (domain.ptr == vaddr.* and domain.len <= size.* and arch.mm.paging.can_map_at(table.level - 1) and is_aligned(vaddr.*, paddr, table.level - 1)) {
+
// Make child mapping etc
+
_ = try table.make_child_mapping(child, if (paddr) |p| p.* else null, perms, memory_type);
+
const step = domain.len;
+
if (step >= size.*) {
+
size.* = 0;
+
return;
+
} else {
+
size.* -= step;
+
vaddr.* += step;
+
if (paddr) |p| {
+
p.* += step;
+
}
+
}
+
} else {
+
const tbl = try table.make_child_table(child, perms);
+
try mapPageImpl(vaddr, paddr, size, tbl, perms, memory_type);
+
}
+
},
+
}
+
if (size.* == 0) return;
+
}
+
}
+
+
fn is_aligned(vaddr: usize, paddr: ?*usize, level: u3) bool {
+
if (!std.mem.isAligned(vaddr, arch.mm.paging.page_sizes[level])) {
+
return false;
+
}
+
+
if (paddr) |p| {
+
return std.mem.isAligned(p.*, arch.mm.paging.page_sizes[level]);
+
}
+
+
return true;
+
}
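For orientation, here is a hedged usage sketch of `mapPhys`; the import path, the addresses, and the `.Device` memory-type variant are illustrative assumptions, not part of this commit:

```zig
const paging = @import("paging.zig"); // assumed import path

// Map a 2 MiB MMIO window into the kernel half. The concrete addresses and
// the MemoryType variant name are examples only, not taken from this repo.
fn mapDeviceWindow() !void {
    try paging.mapPhys(.{
        .vaddr = 0xffff_9000_0000_0000,
        .paddr = 0xfd00_0000,
        .size = 2 * 1024 * 1024,
        .perms = .{ .writable = true, .executable = false, .userspace_accessible = false },
        .memory_type = .Device, // hypothetical variant name
    });
}
```

If the region is 2 MiB-aligned, the walk above stops one level early and emits a single large-page mapping; otherwise it recurses down to 4 KiB leaves.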
+7
components/ukernel/common/mm/root.zig
···
+
pub const bootmem = @import("bootmem.zig");
+
pub const common = @import("../root.zig");
+
pub const paging = @import("paging.zig");
+
+
pub fn physToHHDM(comptime T: type, phys_addr: u64) T {
+
return @ptrFromInt(common.init_data.hhdm_slide + phys_addr);
+
}
+5
components/ukernel/common/root.zig
···
+
pub const aux = @import("aux.zig");
+
pub const mm = @import("mm/root.zig");
+
+
// Arch init must set up appropriate fields!
+
pub var init_data: aux.InitState = .{};
+7
components/ukernel/deps/console/build.zig
···
+
const std = @import("std");
+
+
pub fn build(b: *std.Build) void {
+
_ = b.addModule("console", .{
+
.root_source_file = b.path("console.zig"),
+
});
+
}
+18
components/ukernel/deps/console/build.zig.zon
···
+
.{
+
.name = .console,
+
.fingerprint = 0x3603cfb6f7920fba,
+
.version = "0.0.1",
+
.minimum_zig_version = "0.15.1",
+
.dependencies = .{
+
.limine = .{
+
.path = "../limine-zig",
+
},
+
},
+
.paths = .{
+
"build.zig",
+
"build.zig.zon",
+
"console.zig",
+
"psf2.zig",
+
"fonts",
+
},
+
}
+195
components/ukernel/deps/console/console.zig
···
+
const limine = @import("limine");
+
const builtin = @import("builtin");
+
const psf2 = @import("psf2.zig");
+
pub const Font = psf2.Font;
+
const std = @import("std");
+
const fontdata = @embedFile("fonts/spleen-12x24.psf");
+
const are_we_le = builtin.cpu.arch.endian() == .little;
+
+
pub const DefaultFont = Font.new(fontdata) catch unreachable;
+
+
pub const Framebuffer = struct {
+
const Self = @This();
+
address: [*]u8,
+
width: u64,
+
height: u64,
+
pitch: u64,
+
bypp: u16,
+
red_mask_size: u8,
+
red_mask_shift: u8,
+
green_mask_size: u8,
+
green_mask_shift: u8,
+
blue_mask_size: u8,
+
blue_mask_shift: u8,
+
+
pub fn from_limine(fb: *const limine.Framebuffer) Self {
+
return .{
+
.address = @ptrCast(fb.address),
+
.width = fb.width,
+
.height = fb.height,
+
.pitch = fb.pitch,
+
.red_mask_size = fb.red_mask_size,
+
.red_mask_shift = fb.red_mask_shift,
+
.green_mask_size = fb.green_mask_size,
+
.green_mask_shift = fb.green_mask_shift,
+
.blue_mask_size = fb.blue_mask_size,
+
.blue_mask_shift = fb.blue_mask_shift,
+
.bypp = fb.bpp / 8,
+
};
+
}
+
};
+
+
pub const Console = struct {
+
const Self = @This();
+
const Writer = std.io.GenericWriter(*Self, error{}, write);
+
// framebuffer data
+
fb: Framebuffer,
+
// font
+
font: psf2.Font,
+
// state data
+
current_x: u64 = 0,
+
current_y: u64 = 0,
+
fg_color: u32 = 0xFFFFFFFF,
+
bg_color: u32 = 0,
+
+
pub fn from_font(fb: Framebuffer, font: psf2.Font) Self {
+
return .{
+
.fb = fb,
+
.font = font,
+
};
+
}
+
+
// places a character at the given position
+
pub fn putchar(self: *const Self, ch: u8, cx: u64, cy: u64, fg_val: u32, bg_val: u32) void {
+
// convert colors to bytes
+
const fg_bytes: [4]u8 = @bitCast(if (are_we_le) fg_val else @byteSwap(fg_val));
+
const bg_bytes: [4]u8 = @bitCast(if (are_we_le) bg_val else @byteSwap(bg_val));
+
// initial calculations
+
const bytes_per_line = self.font.hdr.bytesPerLine();
+
const mask_shamt: u5 = @intCast(bytes_per_line * 8 - 1);
+
const mask_initial: u32 = @as(u32, 1) << mask_shamt;
+
const glyph = self.font.getGlyph(ch) catch return;
+
+
// find the screen offset for the beginning of the character
+
// add pitch to go to next line...
+
var offset: u64 = (cy * self.font.hdr.height * self.fb.pitch) + (cx * self.font.hdr.width * self.fb.bypp);
+
// run for every line
+
var y: u32 = 0;
+
var mask: u32 = 0;
+
while (y < self.font.hdr.height) : (y += 1) {
+
// initialize the mask and the current line
+
mask = mask_initial;
+
+
// get the current line
+
const line_value: u32 = psf2.readIntTo32(glyph[y * bytes_per_line ..][0..bytes_per_line]);
+
var line_offset: u64 = offset;
+
var x: u32 = 0;
+
while (x < self.font.hdr.width) : (x += 1) {
+
// write the pixel value to the correct position of the screen...
+
if (line_value & mask != 0) {
+
@memcpy(self.fb.address[line_offset..][0..self.fb.bypp], fg_bytes[0..self.fb.bypp]);
+
} else {
+
@memcpy(self.fb.address[line_offset..][0..self.fb.bypp], bg_bytes[0..self.fb.bypp]);
+
}
+
line_offset += self.fb.bypp;
+
mask >>= 1;
+
}
+
offset += self.fb.pitch;
+
}
+
}
+
+
pub fn putc(self: *Self, ch: u8) void {
+
// input can be \r, \n, or printable.
+
// ignore \r, move down for \n, and print char normally
+
// if \n, check to see if we overrun then scroll
+
// if normal, see if we overrun the end and do newline
+
if (ch == '\r') return;
+
if (ch == '\n') {
+
self.current_x = 0;
+
self.current_y += 1;
+
// scroll if we went past the bottom of the screen
+
if (self.current_y >= self.maxCharsHeight()) {
+
self.scrollUp(1);
+
self.current_y = self.maxCharsHeight() - 1;
+
}
+
return;
+
}
+
self.putchar(ch, self.current_x, self.current_y, self.fg_color, self.bg_color);
+
self.current_x += 1;
+
+
if (self.current_x < self.maxCharsWidth()) return;
+
self.current_x = 0;
+
self.current_y += 1;
+
if (self.current_y >= self.maxCharsHeight()) {
+
self.scrollUp(1);
+
self.current_y = self.maxCharsHeight() - 1;
+
}
+
}
+
+
pub fn puts(self: *Self, msg: []const u8) void {
+
for (msg) |ch| {
+
self.putc(ch);
+
}
+
}
+
+
fn convertColor(self: *const Self, color: u32) u32 {
+
const mult: u32 = blk: {
+
const width: u4 = @truncate(self.fb.red_mask_size);
+
break :blk (@as(u32, 1) << width) - 1;
+
};
+
const div = 255;
+
const red: u32 = (color >> 16) & 0xFF;
+
const green: u32 = (color >> 8) & 0xFF;
+
const blue: u32 = color & 0xFF;
+
+
const red_shift: u5 = @truncate(self.fb.red_mask_shift);
+
const green_shift: u5 = @truncate(self.fb.green_mask_shift);
+
const blue_shift: u5 = @truncate(self.fb.blue_mask_shift);
+
+
return (((red * mult) / div) << red_shift) | (((green * mult) / div) << green_shift) | (((blue * mult) / div) << blue_shift);
+
}
+
+
pub fn setColor(self: *Self, fg: u32, bg: u32) void {
+
self.fg_color = self.convertColor(fg);
+
self.bg_color = self.convertColor(bg);
+
}
+
+
pub fn writer(self: *Self) Writer {
+
return .{ .context = self };
+
}
+
+
pub fn write(self: *Self, buffer: []const u8) !usize {
+
self.puts(buffer);
+
return buffer.len;
+
}
+
+
// scroll the lines of text, without doing anything else.
+
// erase the first line of text, and memcpy the second line and on up to the first
+
pub fn scrollUp(self: *Self, amount: u64) void {
+
const num_lines = self.maxCharsHeight();
+
const h = self.font.hdr.height;
+
if (amount > num_lines) return; // TODO: just clear the entire screen instead
+
var i: u64 = amount;
+
while (i < num_lines) : (i += 1) {
+
// for each run, erase the previous line and copy the current line up a line.
+
const curr_line = self.fb.address[i * h * self.fb.pitch ..][0 .. h * self.fb.pitch];
+
const prev_line = self.fb.address[(i - amount) * h * self.fb.pitch ..][0 .. h * self.fb.pitch];
+
+
@memset(prev_line, 0);
+
@memcpy(prev_line, curr_line);
+
}
+
// finally, delete the last line(s)
+
const last_line = self.fb.address[(num_lines - amount) * h * self.fb.pitch ..][0 .. amount * h * self.fb.pitch];
+
@memset(last_line, 0);
+
}
+
+
fn maxCharsWidth(self: *const Self) u64 {
+
return self.fb.width / self.font.hdr.width;
+
}
+
+
fn maxCharsHeight(self: *const Self) u64 {
+
return self.fb.height / self.font.hdr.height;
+
}
+
};
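A hedged bring-up sketch for the console above; the import paths and the framebuffer source (`fb_response`, assumed to come from a Limine `FramebufferRequest` elsewhere in the kernel) are assumptions:

```zig
const console = @import("console.zig"); // assumed import path
const limine = @import("limine");

// Wrap the first Limine framebuffer in a Console and print through the
// GenericWriter interface.
fn initConsole(fb_response: *limine.FramebufferResponse) void {
    const fb = console.Framebuffer.from_limine(fb_response.getFramebuffers()[0]);
    var con = console.Console.from_font(fb, console.DefaultFont);
    con.setColor(0x00FF00, 0x000000); // 0xRRGGBB, converted to the fb's channel masks
    con.writer().print("ukernel: console online\n", .{}) catch {};
}
```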
+24
components/ukernel/deps/console/fonts/LICENSE.spleen
···
+
Copyright (c) 2018-2024, Frederic Cambus
+
All rights reserved.
+
+
Redistribution and use in source and binary forms, with or without
+
modification, are permitted provided that the following conditions are met:
+
+
* Redistributions of source code must retain the above copyright
+
notice, this list of conditions and the following disclaimer.
+
+
* Redistributions in binary form must reproduce the above copyright
+
notice, this list of conditions and the following disclaimer in the
+
documentation and/or other materials provided with the distribution.
+
+
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS
+
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+
POSSIBILITY OF SUCH DAMAGE.
components/ukernel/deps/console/fonts/bold16x32.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/bold8x16.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-12x24.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-16x32.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-32x64.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-5x8.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-6x12.psf

This is a binary file and will not be displayed.

components/ukernel/deps/console/fonts/spleen-8x16.psf

This is a binary file and will not be displayed.

+53
components/ukernel/deps/console/psf2.zig
···
+
const std = @import("std");
+
+
pub const Font = struct {
+
const Self = @This();
+
pub const PsfHeader = extern struct {
+
magic: u32 = 0x864ab572,
+
version: u32,
+
header_size: u32,
+
flags: u32,
+
numglyph: u32,
+
bytes_per_glyph: u32,
+
height: u32,
+
width: u32,
+
+
pub fn bytesPerLine(self: *const PsfHeader) u32 {
+
return (self.width + 7) / 8;
+
}
+
};
+
+
fontdata: []const u8,
+
hdr: PsfHeader,
+
+
pub fn new(fontdata: []const u8) !Self {
+
var ret: Self = undefined;
+
ret.fontdata = fontdata;
+
+
// fill the header properly
+
const hdr_size = @sizeOf(PsfHeader);
+
if (fontdata.len < hdr_size) return error.TooSmall;
+
const hdr_ptr: [*]u8 = @ptrCast(&ret.hdr);
+
@memcpy(hdr_ptr[0..hdr_size], fontdata[0..hdr_size]);
+
+
return ret;
+
}
+
+
pub fn getGlyph(self: *const Self, ch: u8) ![]const u8 {
+
const startpos: u64 = self.hdr.header_size + ch * self.hdr.bytes_per_glyph;
+
const endpos: u64 = startpos + self.hdr.bytes_per_glyph;
+
+
if (self.fontdata.len < endpos) return error.InvalidCharacter;
+
return self.fontdata[startpos..endpos];
+
}
+
};
+
pub fn readIntTo32(buffer: []const u8) u32 {
+
const readInt = std.mem.readInt;
+
return switch (buffer.len) {
+
0 => 0,
+
1 => @intCast(readInt(u8, buffer[0..1], .big)),
+
2 => @intCast(readInt(u16, buffer[0..2], .big)),
+
3 => @intCast(readInt(u24, buffer[0..3], .big)),
+
else => @intCast(readInt(u32, buffer[0..4], .big)),
+
};
+
}
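The glyph layout can be summarized with a small sketch: glyph `ch` begins at `header_size + ch * bytes_per_glyph`, and each bitmap row spans `(width + 7) / 8` bytes with the leftmost pixel in the most significant bit (hence the big-endian reads in `readIntTo32`). The helper name below is hypothetical:

```zig
const psf2 = @import("psf2.zig"); // assumed import path

// Return one bitmap row of a glyph; the leftmost pixel ends up at bit
// (bytesPerLine() * 8 - 1). Caller must keep `row` below font.hdr.height.
fn glyphRow(fontdata: []const u8, ch: u8, row: u32) !u32 {
    const font = try psf2.Font.new(fontdata);
    const glyph = try font.getGlyph(ch);
    const bpl = font.hdr.bytesPerLine();
    return psf2.readIntTo32(glyph[row * bpl ..][0..bpl]);
}
```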
+1
components/ukernel/deps/limine-zig/.gitignore
···
+
/.zig-cache
+22
components/ukernel/deps/limine-zig/LICENSE
···
+
Copyright (C) 2022-2024 48cf <iretq@riseup.net> and contributors.
+
+
Redistribution and use in source and binary forms, with or without
+
modification, are permitted provided that the following conditions are met:
+
+
1. Redistributions of source code must retain the above copyright notice, this
+
list of conditions and the following disclaimer.
+
+
2. Redistributions in binary form must reproduce the above copyright notice,
+
this list of conditions and the following disclaimer in the documentation
+
and/or other materials provided with the distribution.
+
+
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+35
components/ukernel/deps/limine-zig/README.md
···
+
# limine-zig
+
+
Zig bindings for the [The Limine Boot Protocol](https://github.com/limine-bootloader/limine/blob/trunk/PROTOCOL.md).
+
+
To use this library, add it to your `build.zig.zon` file manually or use `zig fetch`:
+
+
```sh
+
zig fetch --save git+https://github.com/48cf/limine-zig#trunk
+
```
+
+
Then, import the library in your `build.zig`:
+
+
```zig
+
const limine_zig = b.dependency("limine_zig", .{
+
// The API revision of the Limine Boot Protocol to use, if not provided
+
// it defaults to 0. Newer revisions may change the behavior of the bootloader.
+
.api_revision = 3,
+
// Whether to allow using deprecated features of the Limine Boot Protocol.
+
// If set to false, the build will fail if deprecated features are used.
+
.allow_deprecated = false,
+
// Whether to avoid pointers in the API. When set to true, any field
+
// that is a pointer will be exposed as a raw address instead.
+
.no_pointers = false,
+
});
+
+
// Get the Limine module
+
const limine_module = limine_zig.module("limine");
+
+
// Import the Limine module into the kernel
+
kernel.addImport("limine", limine_module);
+
```
+
+
You can find an example kernel using this library [here](https://github.com/48cf/limine-zig-template).
+
+
To use this library, you need at least Zig 0.14.0.
+17
components/ukernel/deps/limine-zig/build.zig
···
+
const std = @import("std");
+
+
pub fn build(b: *std.Build) void {
+
const api_revision = b.option(u32, "api_revision", "Limine API revision to use");
+
const allow_deprecated = b.option(bool, "allow_deprecated", "Whether to allow deprecated features");
+
const no_pointers = b.option(bool, "no_pointers", "Whether to expose pointers as addresses");
+
+
const config = b.addOptions();
+
config.addOption(u32, "api_revision", api_revision orelse 0);
+
config.addOption(bool, "allow_deprecated", allow_deprecated orelse false);
+
config.addOption(bool, "no_pointers", no_pointers orelse false);
+
+
const module = b.addModule("limine", .{
+
.root_source_file = b.path("src/root.zig"),
+
});
+
module.addImport("config", config.createModule());
+
}
+14
components/ukernel/deps/limine-zig/build.zig.zon
···
+
.{
+
.name = .limine_zig,
+
.version = "0.0.0",
+
.fingerprint = 0x439c475b40f038d5,
+
.minimum_zig_version = "0.14.0",
+
.dependencies = .{},
+
.paths = .{
+
"build.zig",
+
"build.zig.zon",
+
"src",
+
"LICENSE",
+
"README.md",
+
},
+
}
+988
components/ukernel/deps/limine-zig/src/root.zig
···
+
const builtin = @import("builtin");
+
const config = @import("config");
+
const std = @import("std");
+
+
pub const Arch = enum {
+
x86_64,
+
aarch64,
+
riscv64,
+
loongarch64,
+
};
+
+
pub const api_revision = config.api_revision;
+
pub const arch: Arch = switch (builtin.cpu.arch) {
+
.x86_64 => .x86_64,
+
.aarch64 => .aarch64,
+
.riscv64 => .riscv64,
+
.loongarch64 => .loongarch64,
+
else => |arch_tag| @compileError("Unsupported architecture: " ++ @tagName(arch_tag)),
+
};
+
+
fn id(a: u64, b: u64) [4]u64 {
+
return .{ 0xc7b1dd30df4c8b88, 0x0a82e883a194f07b, a, b };
+
}
+
+
fn LiminePtr(comptime Type: type) type {
+
return if (config.no_pointers) u64 else Type;
+
}
+
+
const init_pointer = if (config.no_pointers)
+
0
+
else
+
null;
+
+
pub const RequestsStartMarker = extern struct {
+
marker: [4]u64 = .{
+
0xf6b8f4b39de7d1ae,
+
0xfab91a6940fcb9cf,
+
0x785c6ed015d3e316,
+
0x181e920a7852b9d9,
+
},
+
};
+
+
pub const RequestsEndMarker = extern struct {
+
marker: [2]u64 = .{ 0xadc0e0531bb10d03, 0x9572709f31764c62 },
+
};
+
+
pub const BaseRevision = extern struct {
+
magic: [2]u64 = .{ 0xf9562b2d5c95a6c8, 0x6a7b384944536bdc },
+
revision: u64,
+
+
pub fn init(revision: u64) @This() {
+
return .{ .revision = revision };
+
}
+
+
pub fn loadedRevision(self: @This()) u64 {
+
return self.magic[1];
+
}
+
+
pub fn isValid(self: @This()) bool {
+
return self.magic[1] != 0x6a7b384944536bdc;
+
}
+
+
pub fn isSupported(self: @This()) bool {
+
return self.revision == 0;
+
}
+
};
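The markers and `BaseRevision` above are meant to be declared as exported globals that the bootloader scans for. A hedged declaration sketch — the section names follow the common Limine convention and must match your linker script, which is an assumption here:

```zig
const limine = @import("limine");

// The bootloader locates these structs in the binary and patches their
// response pointers in place before the kernel entry point runs.
pub export var start_marker: limine.RequestsStartMarker linksection(".limine_requests_start") = .{};
pub export var base_revision: limine.BaseRevision linksection(".limine_requests") = limine.BaseRevision.init(3);
pub export var framebuffer_request: limine.FramebufferRequest linksection(".limine_requests") = .{};
pub export var end_marker: limine.RequestsEndMarker linksection(".limine_requests_end") = .{};
```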
+
+
pub const Uuid = extern struct {
+
a: u32,
+
b: u16,
+
c: u16,
+
d: [8]u8,
+
};
+
+
pub const MediaType = enum(u32) {
+
generic = 0,
+
optical = 1,
+
tftp = 2,
+
_,
+
};
+
+
const LimineFileV1 = extern struct {
+
revision: u64,
+
address: LiminePtr(*align(4096) anyopaque),
+
size: u64,
+
path: LiminePtr([*:0]u8),
+
cmdline: LiminePtr([*:0]u8),
+
media_type: MediaType,
+
unused: u32,
+
tftp_ip: u32,
+
tftp_port: u32,
+
partition_index: u32,
+
mbr_disk_id: u32,
+
gpt_disk_uuid: Uuid,
+
gpt_part_uuid: Uuid,
+
part_uuid: Uuid,
+
};
+
+
const LimineFileV2 = extern struct {
+
revision: u64,
+
address: LiminePtr(*align(4096) anyopaque),
+
size: u64,
+
path: LiminePtr([*:0]u8),
+
string: LiminePtr([*:0]u8),
+
media_type: MediaType,
+
unused: u32,
+
tftp_ip: u32,
+
tftp_port: u32,
+
partition_index: u32,
+
mbr_disk_id: u32,
+
gpt_disk_uuid: Uuid,
+
gpt_part_uuid: Uuid,
+
part_uuid: Uuid,
+
};
+
+
pub const File = if (config.api_revision >= 3)
+
LimineFileV2
+
else
+
LimineFileV1;
+
+
// Boot info
+
+
pub const BootloaderInfoResponse = extern struct {
+
revision: u64,
+
name: LiminePtr([*:0]u8),
+
version: LiminePtr([*:0]u8),
+
};
+
+
pub const BootloaderInfoRequest = extern struct {
+
id: [4]u64 = id(0xf55038d8e2a1202f, 0x279426fcf5f59740),
+
revision: u64 = 0,
+
response: LiminePtr(?*BootloaderInfoResponse) = init_pointer,
+
};
+
+
// Executable command line
+
+
pub const ExecutableCmdlineResponse = extern struct {
+
revision: u64,
+
cmdline: LiminePtr([*:0]u8),
+
};
+
+
pub const ExecutableCmdlineRequest = extern struct {
+
id: [4]u64 = id(0x4b161536e598651e, 0xb390ad4a2f1f303a),
+
revision: u64 = 0,
+
response: LiminePtr(?*ExecutableCmdlineResponse) = init_pointer,
+
};
+
+
// Firmware type
+
+
pub const FirmwareType = enum(u64) {
+
x86_bios = 0,
+
uefi32 = 1,
+
uefi64 = 2,
+
sbi = 3,
+
_,
+
};
+
+
pub const FirmwareTypeResponse = extern struct {
+
revision: u64,
+
firmware_type: FirmwareType,
+
};
+
+
pub const FirmwareTypeRequest = extern struct {
+
id: [4]u64 = id(0x8c2f75d90bef28a8, 0x7045a4688eac00c3),
+
revision: u64 = 0,
+
response: LiminePtr(?*FirmwareTypeResponse) = init_pointer,
+
};
+
+
// Stack size
+
+
pub const StackSizeResponse = extern struct {
+
revision: u64,
+
};
+
+
pub const StackSizeRequest = extern struct {
+
id: [4]u64 = id(0x224ef0460a8e8926, 0xe1cb0fc25f46ea3d),
+
revision: u64 = 0,
+
response: LiminePtr(?*StackSizeResponse) = init_pointer,
+
stack_size: u64,
+
};
+
+
// HHDM
+
+
pub const HhdmResponse = extern struct {
+
revision: u64,
+
offset: u64,
+
};
+
+
pub const HhdmRequest = extern struct {
+
id: [4]u64 = id(0x48dcf1cb8ad2b852, 0x63984e959a98244b),
+
revision: u64 = 0,
+
response: LiminePtr(?*HhdmResponse) = init_pointer,
+
};
+
+
// Framebuffer
+
+
pub const FramebufferMemoryModel = enum(u8) {
+
rgb = 1,
+
_,
+
};
+
+
pub const VideoMode = extern struct {
+
pitch: u64,
+
width: u64,
+
height: u64,
+
bpp: u16,
+
memory_model: FramebufferMemoryModel,
+
red_mask_size: u8,
+
red_mask_shift: u8,
+
green_mask_size: u8,
+
green_mask_shift: u8,
+
blue_mask_size: u8,
+
blue_mask_shift: u8,
+
};
+
+
pub const Framebuffer = extern struct {
+
address: LiminePtr(*anyopaque),
+
width: u64,
+
height: u64,
+
pitch: u64,
+
bpp: u16,
+
memory_model: FramebufferMemoryModel,
+
red_mask_size: u8,
+
red_mask_shift: u8,
+
green_mask_size: u8,
+
green_mask_shift: u8,
+
blue_mask_size: u8,
+
blue_mask_shift: u8,
+
edid_size: u64,
+
edid: LiminePtr(?*anyopaque),
+
// Response revision 1
+
mode_count: u64,
+
modes: LiminePtr([*]*VideoMode),
+
+
/// Helper function to retrieve the EDID data as a slice.
+
/// This function will return null if the EDID size is 0 or if
+
/// the EDID pointer is null.
+
pub fn getEdid(self: @This()) ?[]u8 {
+
if (self.edid_size == 0 or self.edid == null) {
+
return null;
+
}
+
return @as([*]u8, @ptrCast(self.edid.?))[0..self.edid_size];
+
}
+
+
/// Helper function to retrieve a slice of the modes array.
+
/// This function is only available since revision 1 of the response and
+
/// will return an error if called with an older response. This is to
+
/// prevent the user from possibly accessing uninitialized memory.
+
pub fn getModes(self: @This(), response: *FramebufferResponse) ![]*VideoMode {
+
if (response.revision < 1) {
+
return error.NotSupported;
+
}
+
return self.modes[0..self.mode_count];
+
}
+
};
+
+
pub const FramebufferResponse = extern struct {
+
revision: u64,
+
framebuffer_count: u64,
+
framebuffers: LiminePtr(?[*]*Framebuffer),
+
+
/// Helper function to retrieve a slice of the framebuffers array.
+
/// This function will return an empty slice if the framebuffer count is 0 or if
+
/// the framebuffers pointer is null.
+
pub fn getFramebuffers(self: @This()) []*Framebuffer {
+
if (self.framebuffer_count == 0 or self.framebuffers == null) {
+
return &.{};
+
}
+
return self.framebuffers.?[0..self.framebuffer_count];
+
}
+
};
+
+
pub const FramebufferRequest = extern struct {
+
id: [4]u64 = id(0x9d5827dcd881dd75, 0xa3148604f6fab11b),
+
revision: u64 = 1,
+
response: LiminePtr(?*FramebufferResponse) = init_pointer,
+
};
+
+
// Terminal
+
+
const TerminalDeprecated = struct {
+
const deprecation_message =
+
\\The Terminal feature was deprecated and is no longer available.
+
\\Kernels are encouraged to manually implement terminal support
+
\\using the Framebuffer feature instead. If you need an easy to
+
\\integrate solution, consider using https://github.com/mintsuki/flanterm.
+
;
+
+
pub const TerminalCallbackType = @compileError(deprecation_message);
+
pub const TerminalCallbackEscapeParams = @compileError(deprecation_message);
+
pub const TerminalCallbackPosReportParams = @compileError(deprecation_message);
+
pub const TerminalCallbackKbdLedsState = @compileError(deprecation_message);
+
pub const TerminalCallbackKbdLedsParams = @compileError(deprecation_message);
+
pub const TerminalWrite = @compileError(deprecation_message);
+
pub const TerminalCallback = @compileError(deprecation_message);
+
pub const Terminal = @compileError(deprecation_message);
+
pub const TerminalResponse = @compileError(deprecation_message);
+
pub const TerminalRequest = @compileError(deprecation_message);
+
};
+
+
pub const TerminalFeature = if (config.allow_deprecated) struct {
+
pub const TerminalCallbackType = enum(u64) {
+
dec = 10,
+
bell = 20,
+
private_id = 30,
+
status_report = 40,
+
pos_report = 50,
+
kbd_leds = 60,
+
mode = 70,
+
linux = 80,
+
_,
+
};
+
+
pub const TerminalCallbackEscapeParams = struct {
+
a1: u64,
+
a2: u64,
+
a3: u64,
+
+
/// Initialize a TerminalCallbackEscapeParams struct, which is used for
+
/// decoding the parameters of the terminal callbacks that handle escape sequences.
+
pub fn init(a1: u64, a2: u64, a3: u64) @This() {
+
return .{ .a1 = a1, .a2 = a2, .a3 = a3 };
+
}
+
+
/// Retrieve the array of values passed to the escape sequence.
+
pub fn values(self: @This()) []u32 {
+
const values_ptr: [*]u32 = @ptrFromInt(self.a2);
+
return values_ptr[0..self.a1];
+
}
+
+
/// Retrieve the final character in a DEC or ECMA-48 Mode Switch
+
/// escape sequence.
+
/// This is the character that is used to determine the type of
+
/// sequence that was sent, usually 'h' or 'l'.
+
pub fn finalChar(self: @This()) u8 {
+
return @intCast(self.a3);
+
}
+
};
+
+
pub const TerminalCallbackPosReportParams = struct {
+
a1: u64,
+
a2: u64,
+
+
/// Initialize a TerminalCallbackPosReportParams struct, which is used for
+
/// decoding the parameters of a position report terminal callback.
+
pub fn init(a1: u64, a2: u64) @This() {
+
return .{ .a1 = a1, .a2 = a2 };
+
}
+
+
/// Retrieve the X position of the cursor.
+
pub fn x(self: @This()) u64 {
+
return self.a1;
+
}
+
+
/// Retrieve the Y position of the cursor.
+
pub fn y(self: @This()) u64 {
+
return self.a2;
+
}
+
};
+
+
pub const TerminalCallbackKbdLedsState = enum(u64) {
+
clear_all = 0,
+
set_scroll_lock = 1,
+
set_num_lock = 2,
+
set_caps_lock = 3,
+
_,
+
};
+
+
pub const TerminalCallbackKbdLedsParams = struct {
+
a1: u64,
+
+
/// Initialize a TerminalCallbackKbdLedsParams struct, which is used for
+
/// decoding the parameters of a keyboard LEDs terminal callback.
+
pub fn init(a1: u64) @This() {
+
return .{ .a1 = a1 };
+
}
+
+
/// Retrieve the state of the Caps Lock LED.
+
pub fn state(self: @This()) TerminalCallbackKbdLedsState {
+
return @enumFromInt(self.a1);
+
}
+
};
+
+
pub const TerminalWrite = *const fn (*Terminal, [*]const u8, u64) callconv(.c) void;
+
+
pub const TerminalCallback = *const fn (*Terminal, TerminalCallbackType, u64, u64, u64) callconv(.c) void;
+
+
pub const Terminal = extern struct {
+
columns: u64,
+
rows: u64,
+
framebuffer: LiminePtr(?*Framebuffer),
+
};
+
+
pub const TerminalResponse = extern struct {
+
revision: u64,
+
terminal_count: u64,
+
terminals: LiminePtr(?[*]*Terminal),
+
write_fn: LiminePtr(TerminalWrite),
+
+
/// Helper function to retrieve a slice of the terminals array.
+
/// This function will return an empty slice if the terminal count is 0 or if
+
/// the terminals pointer is null.
+
pub fn getTerminals(self: @This()) []*Terminal {
+
if (self.terminal_count == 0 or self.terminals == null) {
+
return &.{};
+
}
+
return self.terminals.?[0..self.terminal_count];
+
}
+
+
/// Helper function to write to a terminal.
+
pub fn write(self: @This(), terminal: *Terminal, data: []const u8) void {
+
const write_fn: TerminalWrite = if (config.no_pointers)
+
@ptrFromInt(self.write_fn)
+
else
+
self.write_fn;
+
+
write_fn(terminal, data.ptr, data.len);
+
}
+
};
+
+
pub const TerminalRequest = extern struct {
+
id: [4]u64 = id(0xc8ac59310c2b0844, 0xa68d0c7265d38878),
+
revision: u64 = 0,
+
response: LiminePtr(?*TerminalResponse) = init_pointer,
+
callback: LiminePtr(?TerminalCallback),
+
};
+
} else TerminalDeprecated;
+
+
// Paging mode
+
+
pub const PagingMode = switch (arch) {
+
.x86_64 => enum(u64) {
+
@"4lvl",
+
@"5lvl",
+
_,
+
+
const min: @This() = .@"4lvl";
+
const max: @This() = .@"5lvl";
+
const default: @This() = .@"4lvl";
+
},
+
.aarch64 => enum(u64) {
+
@"4lvl",
+
@"5lvl",
+
_,
+
+
const min: @This() = .@"4lvl";
+
const max: @This() = .@"5lvl";
+
const default: @This() = .@"4lvl";
+
},
+
.riscv64 => enum(u64) {
+
sv39,
+
sv48,
+
sv57,
+
_,
+
+
const min: @This() = .sv39;
+
const max: @This() = .sv57;
+
const default: @This() = .sv48;
+
},
+
.loongarch64 => enum(u64) {
+
@"4lvl",
+
_,
+
+
const min: @This() = .@"4lvl";
+
const max: @This() = .@"4lvl";
+
const default: @This() = .@"4lvl";
+
},
+
};
+
+
pub const PagingModeResponse = extern struct {
+
revision: u64,
+
mode: PagingMode,
+
};
+
+
pub const PagingModeRequest = extern struct {
+
id: [4]u64 = id(0x95c1a0edab0944cb, 0xa4e5cb3842f7488a),
+
revision: u64 = 0,
+
response: LiminePtr(?*PagingModeResponse) = init_pointer,
+
mode: PagingMode = .default,
+
max_mode: PagingMode = .max,
+
min_mode: PagingMode = .min,
+
};
+
+
// 5-level paging
+
+
const FiveLevelPagingDeprecated = struct {
+
const deprecation_message =
+
\\The 5-level paging feature was deprecated and is no longer available.
+
\\Kernels are encouraged to manually request 5-level paging support
+
\\using the Paging mode feature instead.
+
;
+
+
pub const FiveLevelPagingResponse = @compileError(deprecation_message);
+
pub const FiveLevelPagingRequest = @compileError(deprecation_message);
+
};
+
+
pub const FiveLevelPagingFeature = if (config.allow_deprecated) struct {
+
pub const FiveLevelPagingResponse = extern struct {
+
revision: u64,
+
};
+
+
pub const FiveLevelPagingRequest = extern struct {
+
id: [4]u64 = id(0x94469551da9b3192, 0xebe5e86db7382888),
+
revision: u64 = 0,
+
response: LiminePtr(?*FiveLevelPagingResponse) = init_pointer,
+
};
+
} else FiveLevelPagingDeprecated;
+
+
// MP (formerly SMP)
+
+
pub const GotoAddress = *const fn (*SmpMpInfo) callconv(.c) noreturn;
+
+
const SmpMpFlags = switch (arch) {
+
.x86_64 => packed struct(u32) {
+
x2apic: bool = false,
+
reserved: u31 = 0,
+
},
+
.aarch64, .riscv64, .loongarch64 => packed struct(u64) {
+
reserved: u64 = 0,
+
},
+
};
+
+
const SmpMpInfo = switch (arch) {
+
.x86_64 => extern struct {
+
processor_id: u32,
+
lapic_id: u32,
+
reserved: u64,
+
goto_address: LiminePtr(?GotoAddress),
+
extra_argument: u64,
+
},
+
.aarch64 => extern struct {
+
processor_id: u32,
+
mpidr: u64,
+
reserved: u64,
+
goto_address: LiminePtr(?GotoAddress),
+
extra_argument: u64,
+
},
+
.riscv64 => extern struct {
+
processor_id: u64,
+
hartid: u64,
+
reserved: u64,
+
goto_address: LiminePtr(?GotoAddress),
+
extra_argument: u64,
+
},
+
.loongarch64 => extern struct {
+
reserved: u64,
+
},
+
};
+
+
const SmpMpResponse = switch (arch) {
+
.x86_64 => extern struct {
+
revision: u64,
+
flags: SmpMpFlags,
+
bsp_lapic_id: u32,
+
cpu_count: u64,
+
cpus: LiminePtr(?[*]*SmpMpInfo),
+
+
/// Helper function to retrieve a slice of the CPUs array.
+
/// This function will return an empty slice if the CPU count is 0 or if
+
/// the CPUs pointer is null.
+
pub fn getCpus(self: @This()) []*SmpMpInfo {
+
if (self.cpu_count == 0 or self.cpus == null) {
+
return &.{};
+
}
+
return self.cpus.?[0..self.cpu_count];
+
}
+
},
+
.aarch64 => extern struct {
+
revision: u64,
+
flags: SmpMpFlags,
+
bsp_mpidr: u64,
+
cpu_count: u64,
+
cpus: LiminePtr(?[*]*SmpMpInfo),
+
+
/// Helper function to retrieve a slice of the CPUs array.
+
/// This function will return an empty slice if the CPU count is 0 or if
+
/// the CPUs pointer is null.
+
pub fn getCpus(self: @This()) []*SmpMpInfo {
+
if (self.cpu_count == 0 or self.cpus == null) {
+
return &.{};
+
}
+
return self.cpus.?[0..self.cpu_count];
+
}
+
},
+
.riscv64 => extern struct {
+
revision: u64,
+
flags: SmpMpFlags,
+
bsp_hartid: u64,
+
cpu_count: u64,
+
cpus: LiminePtr(?[*]*SmpMpInfo),
+
+
/// Helper function to retrieve a slice of the CPUs array.
+
/// This function will return an empty slice if the CPU count is 0 or if
+
/// the CPUs pointer is null.
+
pub fn getCpus(self: @This()) []*SmpMpInfo {
+
if (self.cpu_count == 0 or self.cpus == null) {
+
return &.{};
+
}
+
return self.cpus.?[0..self.cpu_count];
+
}
+
},
+
.loongarch64 => extern struct {
+
cpu_count: u64,
+
cpus: LiminePtr(?[*]*SmpMpInfo),
+
+
/// Helper function to retrieve a slice of the CPUs array.
+
/// This function will return an empty slice if the CPU count is 0 or if
+
/// the CPUs pointer is null.
+
pub fn getCpus(self: @This()) []*SmpMpInfo {
+
if (self.cpu_count == 0 or self.cpus == null) {
+
return &.{};
+
}
+
return self.cpus.?[0..self.cpu_count];
+
}
+
},
+
};
+
+
const SmpMpRequest = extern struct {
+
id: [4]u64 = id(0x95a67b819a1b857e, 0xa0b61b723b6a73e0),
+
revision: u64 = 0,
+
response: LiminePtr(?*SmpMpResponse) = init_pointer,
+
// The `flags` field in the request is 64-bit on *all* platforms, even
+
// though the flags enum is 32-bit on x86_64. This is to ensure that the
+
// struct is not too small; on x86_64 there is a `reserved: u32` field after it.
+
flags: SmpMpFlags = .{},
+
reserved: u32 = 0,
+
};
+
+
const MpFeature = struct {
+
pub const MpFlags = SmpMpFlags;
+
pub const MpInfo = SmpMpInfo;
+
pub const MpResponse = SmpMpResponse;
+
pub const MpRequest = SmpMpRequest;
+
};
+
+
const SmpFeature = struct {
+
pub const SmpFlags = SmpMpFlags;
+
pub const SmpInfo = SmpMpInfo;
+
pub const SmpResponse = SmpMpResponse;
+
pub const SmpRequest = SmpMpRequest;
+
};
+
+
pub const SmpMpFeature = if (config.api_revision >= 1)
+
MpFeature
+
else
+
SmpFeature;
+
+
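A minimal usage sketch of the multiprocessor feature. This assumes the bindings are importable as `limine` (the actual import name depends on your `build.zig`), that `config.api_revision >= 1` (so the `Mp*` names are in scope), and that `config.no_pointers` is not set; `mp_request` and `countSecondaryCpus` are illustrative names only:

```zig
// Hypothetical kernel-side usage; `limine` is the assumed import name for
// this bindings module. Requests are exported as globals so the bootloader
// can locate them in the binary by their magic ID.
const limine = @import("limine");

pub export var mp_request: limine.SmpMpFeature.MpRequest = .{};

fn countSecondaryCpus() u64 {
    // `response` stays null until the bootloader fills the request in.
    const response = mp_request.response orelse return 0;
    // getCpus() hides the null/zero-count checks and hands back a slice;
    // subtract one for the bootstrap processor.
    const cpus = response.getCpus();
    return if (cpus.len == 0) 0 else cpus.len - 1;
}
```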
// Memory map
+
+
const MemoryMapTypeV1 = enum(u64) {
+
usable = 0,
+
reserved = 1,
+
acpi_reclaimable = 2,
+
acpi_nvs = 3,
+
bad_memory = 4,
+
bootloader_reclaimable = 5,
+
kernel_and_modules = 6,
+
framebuffer = 7,
+
_,
+
};
+
+
const MemoryMapTypeV2 = enum(u64) {
+
usable = 0,
+
reserved = 1,
+
acpi_reclaimable = 2,
+
acpi_nvs = 3,
+
bad_memory = 4,
+
bootloader_reclaimable = 5,
+
executable_and_modules = 6,
+
framebuffer = 7,
+
_,
+
};
+
+
pub const MemoryMapType = if (config.api_revision >= 2)
+
MemoryMapTypeV2
+
else
+
MemoryMapTypeV1;
+
+
pub const MemoryMapEntry = extern struct {
+
base: u64,
+
length: u64,
+
type: MemoryMapType,
+
};
+
+
pub const MemoryMapResponse = extern struct {
+
revision: u64,
+
entry_count: u64,
+
entries: LiminePtr(?[*]*MemoryMapEntry),
+
+
/// Helper function to retrieve a slice of the entries array.
+
/// Returns an empty slice if the entry count is 0 or if
+
/// the entries pointer is null.
+
pub fn getEntries(self: @This()) []*MemoryMapEntry {
+
if (self.entry_count == 0 or self.entries == null) {
+
return &.{};
+
}
+
return self.entries.?[0..self.entry_count];
+
}
+
};
+
+
pub const MemoryMapRequest = extern struct {
+
id: [4]u64 = id(0x67cf3d9d378a806f, 0xe304acdfc50c3c62),
+
revision: u64 = 0,
+
response: LiminePtr(?*MemoryMapResponse) = init_pointer,
+
};
+
+
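The memory map response is typically walked once by the physical memory manager. A minimal sketch, assuming the bindings are importable as `limine` and the default pointer-enabled configuration; `memmap_request` and `totalUsableBytes` are illustrative names:

```zig
// Hypothetical usage sketch; `limine` is the assumed import name.
const limine = @import("limine");

pub export var memmap_request: limine.MemoryMapRequest = .{};

// Sum the lengths of all entries the bootloader marked usable.
fn totalUsableBytes() u64 {
    const response = memmap_request.response orelse return 0;
    var total: u64 = 0;
    for (response.getEntries()) |entry| {
        if (entry.type == .usable) total += entry.length;
    }
    return total;
}
```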
// Entry point
+
+
pub const EntryPoint = *const fn () callconv(.c) noreturn;
+
+
pub const EntryPointResponse = extern struct {
+
revision: u64,
+
};
+
+
pub const EntryPointRequest = extern struct {
+
id: [4]u64 = id(0x13d86c035a1cd3e1, 0x2b0caa89d8f3026a),
+
revision: u64 = 0,
+
response: LiminePtr(?*EntryPointResponse) = init_pointer,
+
entry: LiminePtr(EntryPoint),
+
};
+
+
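Unlike the other requests, `EntryPointRequest.entry` has no default value, so the request cannot be declared with `.{}` alone. A sketch under the same assumptions as above (`limine` import name, pointer-enabled configuration); `kmain` is an illustrative name:

```zig
// Hypothetical sketch; `limine` is the assumed import name.
const limine = @import("limine");

// Must match the `EntryPoint` type: C calling convention, never returns.
fn kmain() callconv(.c) noreturn {
    while (true) {}
}

// The bootloader jumps here instead of the executable's ELF entry point.
pub export var entry_point_request: limine.EntryPointRequest = .{
    .entry = &kmain,
};
```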
// Executable file (formerly Kernel file)
+
+
pub const ExecutableFileFeature = if (config.api_revision >= 2) struct {
+
pub const ExecutableFileResponse = extern struct {
+
revision: u64,
+
executable_file: LiminePtr(*File),
+
};
+
+
pub const ExecutableFileRequest = extern struct {
+
id: [4]u64 = id(0xad97e90e83f1ed67, 0x31eb5d1c5ff23b69),
+
revision: u64 = 0,
+
response: LiminePtr(?*ExecutableFileResponse) = init_pointer,
+
};
+
} else KernelFileFeature;
+
+
const KernelFileFeature = struct {
+
pub const KernelFileResponse = extern struct {
+
revision: u64,
+
kernel_file: LiminePtr(*File),
+
};
+
+
pub const KernelFileRequest = extern struct {
+
id: [4]u64 = id(0xad97e90e83f1ed67, 0x31eb5d1c5ff23b69),
+
revision: u64 = 0,
+
response: LiminePtr(?*KernelFileResponse) = init_pointer,
+
};
+
};
+
+
// Module
+
+
pub const InternalModuleFlag = packed struct(u64) {
+
required: bool,
+
compressed: bool,
+
reserved: u62 = 0,
+
};
+
+
const InternalModuleV1 = extern struct {
+
path: LiminePtr([*:0]const u8),
+
cmdline: LiminePtr([*:0]const u8),
+
flags: InternalModuleFlag,
+
};
+
+
const InternalModuleV2 = extern struct {
+
path: LiminePtr([*:0]const u8),
+
string: LiminePtr([*:0]const u8),
+
flags: InternalModuleFlag,
+
};
+
+
pub const InternalModule = if (config.api_revision >= 3)
+
InternalModuleV2
+
else
+
InternalModuleV1;
+
+
pub const ModuleResponse = extern struct {
+
revision: u64,
+
module_count: u64,
+
modules: LiminePtr(?[*]*File),
+
+
/// Helper function to retrieve a slice of the modules array.
+
/// Returns an empty slice if the module count is 0 or if
+
/// the modules pointer is null.
+
pub fn getModules(self: @This()) []*File {
+
if (self.module_count == 0 or self.modules == null) {
+
return &.{};
+
}
+
return self.modules.?[0..self.module_count];
+
}
+
};
+
+
pub const ModuleRequest = extern struct {
+
id: [4]u64 = id(0x3e7e279702be32af, 0xca1c4f3bd1280cee),
+
revision: u64 = 1,
+
response: LiminePtr(?*ModuleResponse) = init_pointer,
+
// Request revision 1
+
internal_module_count: u64 = 0,
+
internal_modules: LiminePtr(?[*]const *const InternalModule) =
+
if (config.no_pointers) 0 else null,
+
};
+
+
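Loaded modules come back as `*File` entries, retrieved the same way as CPUs and memory map entries. A sketch under the same assumptions (`limine` import name, pointer-enabled configuration); `module_request` and `logModules` are illustrative names:

```zig
// Hypothetical sketch; `limine` is the assumed import name.
const limine = @import("limine");

pub export var module_request: limine.ModuleRequest = .{};

fn logModules() void {
    const response = module_request.response orelse return;
    for (response.getModules()) |module| {
        // Each entry is a *File describing one loaded module; inspect its
        // fields here as needed.
        _ = module;
    }
}
```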
// RSDP
+
+
const RsdpResponseV1 = extern struct {
+
revision: u64,
+
address: LiminePtr(*anyopaque),
+
};
+
+
const RsdpResponseV2 = extern struct {
+
revision: u64,
+
address: u64,
+
};
+
+
/// The response to the RSDP request. If the API revision is 1 or higher,
+
/// the response contains the physical address of the RSDP; otherwise it
+
/// contains the virtual address of the RSDP.
+
pub const RsdpResponse = if (config.api_revision >= 1)
+
RsdpResponseV2
+
else
+
RsdpResponseV1;
+
+
pub const RsdpRequest = extern struct {
+
id: [4]u64 = id(0xc5e77b6b397e7b43, 0x27637845accdcf3c),
+
revision: u64 = 0,
+
response: LiminePtr(?*RsdpResponse) = init_pointer,
+
};
+
+
// SMBIOS
+
+
const SmBiosResponseV1 = extern struct {
+
revision: u64,
+
entry_32: LiminePtr(?*anyopaque),
+
entry_64: LiminePtr(?*anyopaque),
+
};
+
+
const SmBiosResponseV2 = extern struct {
+
revision: u64,
+
entry_32: u64,
+
entry_64: u64,
+
};
+
+
/// The response to the SMBIOS request. If the API revision is 1 or higher,
+
/// the response contains physical addresses of the SMBIOS entries; otherwise
+
/// it contains virtual addresses of the SMBIOS entries.
+
pub const SmBiosResponse = if (config.api_revision >= 1)
+
SmBiosResponseV2
+
else
+
SmBiosResponseV1;
+
+
pub const SmBiosRequest = extern struct {
+
id: [4]u64 = id(0x9e9046f11e095391, 0xaa4a520fefbde5ee),
+
revision: u64 = 0,
+
response: LiminePtr(?*SmBiosResponse) = init_pointer,
+
};
+
+
// EFI system table
+
+
const EfiSystemTableResponseV1 = extern struct {
+
revision: u64,
+
address: LiminePtr(?*std.os.uefi.tables.SystemTable),
+
};
+
+
const EfiSystemTableResponseV2 = extern struct {
+
revision: u64,
+
address: u64,
+
};
+
+
/// The response to the EFI system table request. If the API revision is 1
+
/// or higher, the response contains the physical address of the system table;
+
/// otherwise it contains the virtual address of the system table.
+
pub const EfiSystemTableResponse = if (config.api_revision >= 1)
+
EfiSystemTableResponseV2
+
else
+
EfiSystemTableResponseV1;
+
+
pub const EfiSystemTableRequest = extern struct {
+
id: [4]u64 = id(0x5ceba5163eaaf6d6, 0x0a6981610cf65fcc),
+
revision: u64 = 0,
+
response: LiminePtr(?*EfiSystemTableResponse) = init_pointer,
+
};
+
+
// EFI memory map
+
+
pub const EfiMemoryMapResponse = extern struct {
+
revision: u64,
+
memmap: LiminePtr(*anyopaque),
+
memmap_size: u64,
+
desc_size: u64,
+
desc_version: u64,
+
};
+
+
pub const EfiMemoryMapRequest = extern struct {
+
id: [4]u64 = id(0x7df62a431d6872d5, 0xa4fcdfb3e57306c8),
+
revision: u64 = 0,
+
response: LiminePtr(?*EfiMemoryMapResponse) = init_pointer,
+
};
+
+
// Date at boot (formerly Boot time)
+
+
pub const DateAtBootFeature = if (config.api_revision >= 3) struct {
+
pub const DateAtBootResponse = extern struct {
+
revision: u64,
+
timestamp: i64,
+
};
+
+
pub const DateAtBootRequest = extern struct {
+
id: [4]u64 = id(0x502746e184c088aa, 0xfbc5ec83e6327893),
+
revision: u64 = 0,
+
response: LiminePtr(?*DateAtBootResponse) = init_pointer,
+
};
+
} else BootTimeFeature;
+
+
const BootTimeFeature = struct {
+
pub const BootTimeResponse = extern struct {
+
revision: u64,
+
boot_time: i64,
+
};
+
+
pub const BootTimeRequest = extern struct {
+
id: [4]u64 = id(0x502746e184c088aa, 0xfbc5ec83e6327893),
+
revision: u64 = 0,
+
response: LiminePtr(?*BootTimeResponse) = init_pointer,
+
};
+
};
+
+
// Executable address (formerly Kernel address)
+
+
const ExecutableAddressFeature = if (config.api_revision >= 2) struct {
+
pub const ExecutableAddressResponse = extern struct {
+
revision: u64,
+
physical_base: u64,
+
virtual_base: u64,
+
};
+
+
pub const ExecutableAddressRequest = extern struct {
+
id: [4]u64 = id(0x71ba76863cc55f63, 0xb2644a48c516a487),
+
revision: u64 = 0,
+
response: LiminePtr(?*ExecutableAddressResponse) = init_pointer,
+
};
+
} else KernelAddressFeature;
+
+
const KernelAddressFeature = struct {
+
pub const KernelAddressResponse = extern struct {
+
revision: u64,
+
physical_base: u64,
+
virtual_base: u64,
+
};
+
+
pub const KernelAddressRequest = extern struct {
+
id: [4]u64 = id(0x71ba76863cc55f63, 0xb2644a48c516a487),
+
revision: u64 = 0,
+
response: LiminePtr(?*KernelAddressResponse) = init_pointer,
+
};
+
};
+
+
// Device Tree Blob
+
+
pub const DtbResponse = extern struct {
+
revision: u64,
+
dtb_ptr: LiminePtr(*anyopaque),
+
};
+
+
pub const DtbRequest = extern struct {
+
id: [4]u64 = id(0xb40ddb48fb54bac7, 0x545081493f81ffb7),
+
revision: u64 = 0,
+
response: LiminePtr(?*DtbResponse) = init_pointer,
+
};
+
+
// RISC-V Boot Hart ID
+
+
pub const RiscvBootHartIdResponse = extern struct {
+
revision: u64,
+
bsp_hartid: u64,
+
};
+
+
pub const RiscvBootHartIdRequest = extern struct {
+
id: [4]u64 = id(0x1369359f025525f9, 0x2ff2a56178391bb6),
+
revision: u64 = 0,
+
response: LiminePtr(?*RiscvBootHartIdResponse) = init_pointer,
+
};
+
+
comptime {
+
if (config.api_revision > 3) {
+
@compileError("Limine API revision must be 3 or lower");
+
}
+
+
std.testing.refAllDeclsRecursive(@This());
+
}
+2
components/ukernel/deps/spinlock/.gitignore
···
+
.zig-cache/
+
zig-out
+21
components/ukernel/deps/spinlock/LICENSE
···
+
MIT License
+
+
Copyright (c) 2024 Sreehari Sreedev
+
+
Permission is hereby granted, free of charge, to any person obtaining a copy
+
of this software and associated documentation files (the "Software"), to deal
+
in the Software without restriction, including without limitation the rights
+
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+
copies of the Software, and to permit persons to whom the Software is
+
furnished to do so, subject to the following conditions:
+
+
The above copyright notice and this permission notice shall be included in all
+
copies or substantial portions of the Software.
+
+
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+
SOFTWARE.
+17
components/ukernel/deps/spinlock/README.md
···
+
# spinlock
+
A simple spinlock in zig, with the same API as [std.Thread.Mutex](https://ziglang.org/documentation/master/std/#std.Thread.Mutex).
+
Use this only if you NEED a spinlock; otherwise, use a proper mutex with OS blessings.
+
+
## Usage
+
First, add the package to your build.zig.zon:
+
`zig fetch --save git+https://github.com/frostium-project/spinlock#dev`
+
Then, add the following to your build.zig:
+
```zig
+
const spinlock = b.dependency("spinlock", .{
+
.target = target,
+
.optimize = optimize,
+
});
+
exe.root_module.addImport("spinlock", spinlock.module("spinlock"));
+
```
+
+
Now, you can import the `spinlock` module.
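A minimal sketch of the `std.Thread.Mutex`-style API; `withLock` is an illustrative name:

```zig
const Spinlock = @import("spinlock").Spinlock;

var lock: Spinlock = .{};

fn withLock() void {
    lock.lock();
    // `defer` guarantees the unlock even on early return.
    defer lock.unlock();
    // ... critical section ...
}
```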
+12
components/ukernel/deps/spinlock/build.zig
···
+
const std = @import("std");
+
+
pub fn build(b: *std.Build) void {
+
const target = b.standardTargetOptions(.{});
+
const optimize = b.standardOptimizeOption(.{});
+
_ = target;
+
_ = optimize;
+
+
_ = b.addModule("spinlock", .{
+
.root_source_file = b.path("spinlock.zig"),
+
});
+
}
+14
components/ukernel/deps/spinlock/build.zig.zon
···
+
.{
+
.name = .spinlock,
+
.fingerprint = 0x4a74a9d09adb1e3,
+
.version = "0.0.4",
+
.minimum_zig_version = "0.14.0",
+
+
.paths = .{
+
"build.zig",
+
"build.zig.zon",
+
"spinlock.zig",
+
"LICENSE",
+
"README.md"
+
},
+
}
+44
components/ukernel/deps/spinlock/spinlock.zig
···
+
const std = @import("std");
+
const testing = std.testing;
+
const builtin = @import("builtin");
+
const Thread = std.Thread;
+
+
pub const Spinlock = struct {
+
const Self = @This();
+
const State = enum(u8) { Unlocked = 0, Locked };
+
const AtomicState = std.atomic.Value(State);
+
+
value: AtomicState = AtomicState.init(.Unlocked),
+
+
pub fn lock(self: *Self) void {
+
while (true) {
+
switch (self.value.swap(.Locked, .acquire)) {
+
.Locked => {},
+
.Unlocked => break,
+
}
+
}
+
}
+
+
pub fn tryLock(self: *Self) bool {
+
// The swap returning .Unlocked means we took the lock.
+
return self.value.swap(.Locked, .acquire) == .Unlocked;
+
}
+
+
pub fn unlock(self: *Self) void {
+
self.value.store(.Unlocked, .release);
+
}
+
};
+
+
test "basics" {
+
var lock: Spinlock = .{};
+
+
lock.lock();
+
try testing.expect(!lock.tryLock());
+
lock.unlock();
+
+
try testing.expect(lock.tryLock());
+
try testing.expect(!lock.tryLock());
+
lock.unlock();
+
}
+24
flake.lock
···
+
{
+
"nodes": {
+
"nixpkgs": {
+
"locked": {
+
"lastModified": 315532800,
+
"narHash": "sha256-1Ayx5AcA9t6riKWsuwLNI8x9SvLXKDOeBcfY4kZb0Zs=",
+
"rev": "aaff8c16d7fc04991cac6245bee1baa31f72b1e1",
+
"type": "tarball",
+
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre855444.aaff8c16d7fc/nixexprs.tar.xz?rev=aaff8c16d7fc04991cac6245bee1baa31f72b1e1"
+
},
+
"original": {
+
"type": "tarball",
+
"url": "https://channels.nixos.org/nixpkgs-unstable/nixexprs.tar.xz"
+
}
+
},
+
"root": {
+
"inputs": {
+
"nixpkgs": "nixpkgs"
+
}
+
}
+
},
+
"root": "root",
+
"version": 7
+
}
+24
flake.nix
···
+
{
+
inputs = {
+
nixpkgs.url = "https://channels.nixos.org/nixpkgs-unstable/nixexprs.tar.xz";
+
};
+
outputs =
+
{ nixpkgs, ... }@inputs:
+
let
+
inherit (inputs.nixpkgs) lib;
+
forAllSystems =
+
body: lib.genAttrs lib.systems.flakeExposed (system: body nixpkgs.legacyPackages.${system});
+
in
+
{
+
devShells = forAllSystems (pkgs: {
+
default = pkgs.mkShell {
+
packages = with pkgs; [
+
zig_0_15
+
qemu
+
];
+
};
+
});
+
+
formatter = forAllSystems (pkgs: pkgs.nixfmt-rfc-style);
+
};
+
}