平沢進の亜種音TV / Susumu Hirasawa's Asyu-On TV (Ashu-On TV) (Sound Sub-Species TV), April 25, 2026 "非コード人のアコード + タオの3つのエコー/Accord of the Uncoded + Three Echoes of the Dao" 編/Edition
【Translation in English】
(The title screen displays the text "Hirasawa Susumu's Asyu-On TV/Sound Subspecies TV: Accord of the Uncoded + Three Echoes of the Dao")
[00:22]
Hirasawa Susumu: Hello everyone. This is Hirasawa Susumu. Hirasawa Susumu's Back Space Pass has been renamed "Asyu-On TV(Sound Subspecies TV)" starting with this episode. So, first, for the benefit of our new fans, let me briefly explain what "Back Space Pass, now renamed 'Asyu-On TV'" is.
Asyu-On TV is a streaming broadcast that airs after any major event I've participated in, such as a recording session, a live performance, or a concert tour.
The broadcast includes various anecdotes and thoughts related to the event, and, if time permits, answers to questions.
It's basically a live broadcast, but it's saved as an on-demand video afterward, so you can watch it anytime.
Don't worry if you missed it.
[01:31]
So, today I'd like to begin the talk show titled "Asyu-On TV: 'Accord of the Uncoded' + 'Three Echoes of the Dao: Tokyo Edition.'"
[01:41]
However, before that, there's something I'd like to explain. Actually, this broadcast isn't live; it's a pre-recorded program.
This is because I've had two dental implant surgeries this month, and there's a possibility that my pronunciation might change or that some part of my face might swell and look unsightly. Because it's difficult to predict, I've decided to broadcast a recording made before all the surgeries. I ask for your understanding.
[02:26]
Although it's a recording, it's basically unedited to give it a live feel, so there may be some parts that are difficult to see or hear. Again, please forgive me.
.
[02:41]
Let's begin with "Accord of the Uncoded."
"Accord of the Uncoded" was a live concert held in Osaka and Tokyo from December 2025 to January 2026, spanning across the New Year, and was held in conjunction with the release of Kaku P-MODEL's new album, "unZIP."
[03:09]
As usual, Ejin (TAZZ and SSHO) were there as support members for the live performances. Ejin usually participate as support members for Hirasawa's solo shows or the shows in the Hybrid Phonon format, but from this point on, Ejin will also be participating as support members for Kaku P-MODEL's live performances.
.
※[Note: Susumu Hirasawa says that this is the first time (2025-2026) Ejin have performed as support members for Kaku P-MODEL, but this is a misremembering on his part. In reality, Ejin have been supporting Kaku P-MODEL since the "Kai = Kai" concert in 2018.]
.
[03:34]
There's a reason for that. It's predicted that with unZIP, Kaku P-MODEL may end its role as the P-MODEL brand. From now on, the Kaku P-MODEL brand will move closer to my solo work, and there's a possibility that the two will eventually merge. As an intermediate step, the Ejin members are gradually starting to participate in Kaku P-MODEL's performances.
[04:11]
And I'd like to talk a little about the song selection for this live performance. I've heard complaints like, "Even though this is a Kaku P-MODEL live performance, there aren't many Kaku P-MODEL songs?!" You're absolutely right, and there's actually a reason for that.
It's related to the content of Kaku P-MODEL's latest album, "unZIP." unZIP represents the idea that people who were imprisoned in the "Room," as in P-MODEL's first album "In A Model Room," are released from their confinement, that is, uncompressed, unzipped, and liberated.
In other words, the album is positioned within a process where people are liberated from the social persona defined by others, not the persona defined by themselves, or from a persona they unwillingly believe to be themselves, and move towards their true selves.
[05:32]
Considering this, the nature of P-MODEL from their debut album to "Kaku P-MODEL" tends to be pessimistic. Therefore, the P-MODEL brand has historically focused on themes such as dystopia, cautionary expressions, depictions of how people are oppressed, and questioning who is responsible for such actions.
.
And with unZIP, the album is created from a perspective that, in a sense, crosses a turning point in human history toward liberation or detoxification. While unZIP has a positive and optimistic tone, the P-MODEL brand's earlier tone was dark, negative, critical, and rebellious, making it difficult to reconcile with the tone of unZIP.
.
[06:45]
In that case, apart from unZIP itself, the songs for a single live performance had to be drawn from before unZIP, and almost all of Kaku P-MODEL's pre-unZIP songs were rejected. Therefore, in choosing songs that emphasized the concept, it was necessary to gather songs from P-MODEL's entire catalog, from their debut album to the most recent Kaku P-MODEL album, that were closer to the unZIP perspective, or had a more positive viewpoint.
[07:26]
That's how I ended up with that selection of songs. In the process, it resulted in a list of songs that hadn't been performed in a very long time.
.
※[Note: The Concert "unZIP: Accord of the Uncoded" Setlist in Osaka:
1. Cyborg, 2. GRID, 3. Phase-0, 4. Delusion Railway, 5. Haldyn Dome, 6. Catastrophe by the Window, 7. Parthenon (featuring modular synth solo performance), 8. Julia Bird, 9. Veronica, 10. Phase-6, 11. Solid Air, 12. Bye Bye Halycon, 13. Zebra, 14. The beginning of Timeline, 15. Parallel Kozak, 16. Another Day, 17. Moiporia]
.
[07:42]
I'd like to talk a little about this. It's an interesting result that several songs were performed for the first time in over 30 years.
[07:57]
For example, GRID. I don't trust it much, but according to the AI's research, GRID was performed for the first time in 34 years.
[08:13]
AI often blatantly lies, so I have to be careful, but to save myself the trouble of researching it myself, I'll use the AI's research here, knowing that there's a possibility of lies, to examine other songs as well. According to that, Cluster was performed for the first time in 32 years, and Julia Bird for the first time in 35 years.
[08:44]
There are also other older songs, like Another Day and Solid Air, which I've selected for my solo live performances. In the past there was also a project called Kangen-shugi, in which I arranged P-MODEL songs in a solo style using strings, and Another Day and others were featured there as well. If we set those aside, it seems this is the first time in 34 years that P-MODEL itself has performed this song.
[09:17]
It might sound like I'm boasting about old songs, but the truth is I have a large number of works. According to a survey I did over 20 years ago, I have over 300 songs with registered copyrights. Of course, this includes songs made for commercials and film scores, but if I had the opportunity to perform all of these songs just once a year, and then to perform them a second time... That's a grand story spanning 300 years, so saying it's been over 30 years is rather tame, wouldn't you say?
[10:07]
However, when I think about just how much I've worked, it sends shivers down my spine, but thanks to everyone's support, my imagination hasn't run dry yet, and I'm always able to focus on new works.
[10:35]
As for the selection of songs, it had the characteristics described up to this point.
[10:43]
And "The Accord of the Uncoded" features several new approaches in its live performances. One is the introduction of a new laser harp. Another is the introduction of a modular synthesizer and the guitar synthesizer solo parts, which I think were the highlight.
[11:08]
First, I'd like to talk about the new laser harp. This is a laser harp made by Maywa Denki, and Mr. Tosa (Novmichi Tosa), the president of Maywa Denki, personally named it "Kirin" (輝鈴 [shining bell]).
.
[11:28]
My first meeting with Mr. Tosa(Novmichi Tosa) of Maywa Denki was during a dialogue for the magazine "FILTER," but actually, judging from the public's impression and perception, it might seem like we'd known each other much longer. However, during P-MODEL's concert tour when we released albums like "Fune," Maywa Denki left a message for P-MODEL in the dressing room of a local live house. While that could be considered our first contact, the "FILTER" dialogue was actually the first time we met and spoke in person.
[12:18]
The dialogue took place in a certain location where Maywa Denki's products were displayed. The atmosphere was incredibly strange and beautiful; surrounded by these beautiful, retro-futuristic products whose historical context is unclear, we were able to confirm our respective thoughts on "strange machines that play music."
[12:53]
During that conversation, the thought crossed my mind, "Perhaps I could ask Maywa Denki to make a laser harp for me?" However, I held back at the time and didn't say it out loud. Later, I tentatively sent them an email asking, "Would you be willing to make a new laser harp for me?" I explained, "I was captivated by the beautiful design and unique concept of Maywa Denki products, so I would really like you to create a new laser harp in that vein..." To my surprise, Mr. Tosa readily agreed, and after several meetings, a few months later, it was completed and named "Kirin" (輝鈴).
[13:54]
And this "The Accord of the Uncoded" was the first stage performance of the new laser harp, Kirin. At that time, I was still using the same method as with the previous laser harp, reusing the same performance data.
[14:19]
The Kirin laser harp actually has a feature that the previous generation laser harp didn't have. I'm planning to showcase this feature at Fuji Rock Festival in the summer, but let me give you a quick explanation. Simply put, the laser beams move. With the current laser harp, blocking a beam triggers performance data and produces sound. In other words, blocking it sends a command. The Kirin, however, also has a function that moves a laser beam when performance data is input. So the Kirin can output performance data when a beam is blocked, and move the beams themselves in response to incoming performance data, producing sound and motion together.
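To make the difference concrete, here is a minimal, purely illustrative Python sketch of the two behaviors just described: blocking a beam emits performance data, while incoming performance data moves a beam. The class, event names, and numbers are all invented for illustration; this is not Maywa Denki's actual design.

```python
# Hypothetical sketch of the two-way behavior described above.
# NOT Maywa Denki's real firmware; all names are invented.

class KirinHarp:
    def __init__(self, num_beams):
        self.num_beams = num_beams
        self.beam_angles = [0.0] * num_beams  # Kirin's beams can move
        self.events_out = []                  # performance data sent out

    def block_beam(self, index, note):
        """Blocking a beam acts as a command: it emits a note event."""
        self.events_out.append(("note_on", index, note))

    def receive_performance_data(self, index, angle):
        """Incoming performance data can move a beam (Kirin's new feature)."""
        self.beam_angles[index] = angle

harp = KirinHarp(num_beams=8)
harp.block_beam(0, note=60)           # player interrupts beam 0 -> sound out
harp.receive_performance_data(0, 15)  # sequencer input -> beam 0 tilts
```

The older harp, in this analogy, would have only the `block_beam` direction; the `receive_performance_data` direction is what makes the beams themselves part of the visual performance.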
[15:32]
This is a wonderful feature that can be used for visual showmanship and leads to new visual performances, so please look forward to it at Fuji Rock.
[15:44]
And that's enough about the laser harp for now. As another highlight, I used a modular synthesizer. Those of you who have seen "The Accord of the Uncoded" know, I am sure, that the scene of me operating a modular synthesizer was filmed and projected onto a large screen, so you could see what it looks like.
[16:19]
Normally, a synthesizer is a system where each function is internally wired, and you play a complete system. However, a modular synthesizer inherits the form of the early days of synthesizers, the form when they were still being experimentally created. In other words, each function module is simply lined up, and you create sounds or add variations to performances by externally wiring and connecting them.
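As a loose software analogy for the external wiring just described (my own illustration, not any real synthesizer's API), each module can be pictured as a standalone function, and "patching" as explicitly wiring one module's output into the next module's input:

```python
# Illustrative analogy only: modules as bare functions, patch cables as
# explicit function calls. Module names follow common modular shorthand.

import math

def vco(phase):                  # oscillator module: phase -> raw waveform
    return math.sin(phase)

def vcf(sample, cutoff=0.5):     # "filter" module: crudely attenuate
    return sample * cutoff

def vca(sample, level=0.8):      # amplifier module: apply output level
    return sample * level

# The patch: VCO -> VCF -> VCA, wired by hand, like patch cables.
patched = [vca(vcf(vco(2 * math.pi * i / 64))) for i in range(64)]
```

A conventional synthesizer hard-wires this chain internally; on a modular system, nothing sounds until the player makes each of these connections themselves.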
[17:10]
In fact, in terms of technological progress, it should have been phased out long ago, but it survives thanks to its form, the sense of admiration for that kind of man-machine interaction, and the strong support of its enthusiasts.
[17:34]
That being said, synthesizers have already evolved considerably since then. They've progressed from physical, solid synthesizers to the kind of software synthesizers I use today—conceptual virtual synthesizers that require no wiring and take up no space. Just as there are people fascinated by vintage cars or those who find value in old furniture, modular synthesizers sometimes receive a similar kind of attention.
[18:24]
They possess a quality that goes beyond the sophistication of a musical instrument; they project a romantic, narrative quality, creating a more man-machine-like interaction. While they are very inconvenient tools, this is precisely why the sounds they produce have a certain tendency. They produce sounds with nuances distinct from those created by lining up and coordinating several modern synthesizers.
[19:09]
And some musicians aim to express the sonic character, or the depth of the sound image, that becomes possible partly because of the frustration of not being able to change something instantly due to the poor operability.
[19:29]
The reason I've recently adopted modular synthesizers goes back to a conversation I had with Kenji Konishi and Hajime Fukuma for "FILTER," at a point when I had already moved beyond physical synthesizers and stripped things down to using only conceptual software synthesizers. Both Konishi and Fukuma are musicians who use modular synthesizers, and, as I mentioned earlier, they create sound from a world where they are forced into more man-machine interaction, adjusting increasingly inconvenient machines. As we talked, I suddenly realized, "That's right, I was like that too!" I felt an urge to reintroduce modular synthesizers and, if possible, perform a live show with Konishi, Fukuma, and myself.
[20:44]
After the discussion, the three of us talked about various things, and we agreed that I would introduce a modular synthesizer, and we should think about a project together. Unfortunately, as you know, Hajime Fukuma passed away, so that never materialized.
[21:07]
However, I haven't given up hope or the possibility of doing something with Kenji Konishi.
[21:19]
The modular synth I use is a Black System from the brand Erica Synths. Normally, with modular synths, the fun lies in customizing each module to your liking, for example, "this module for this function is German-made, this module for this function is Russian-made, this one is Japanese-made, this one is California-made, or even from the opposite coast (East Coast)." However, I didn't have the time to research, select, and build my own system, so I purchased a pre-assembled Black System, which has a control system more suited to live performances.
[22:26]
While I was tinkering with it in my limited time, I decided, "Okay, let's try using it live," and that's what I did the other day.
[22:39]
And one more thing related to synthesizers: a guitar synthesizer solo part was performed. Following the modular synth solo part, I tried a guitar synthesizer solo as the next part. I've been using several types of guitar synthesizers since the days of the old P-MODEL. Guitar synthesizers are very difficult machines: they present many challenges for developers and performers alike, and are hard instruments to handle. However, these problems have largely been resolved in modern times, and they have evolved to a level where they can be used practically without any issues.
[23:38]
I used to use guitar synthesizers as part of an ensemble-like nuance within a band sound. Or, to be more unconventional, I would do strange things like producing percussion sounds from a guitar.
[24:00]
However, this time, I decided to try a solo that has only become possible because of the evolution of guitar synthesizers, and the one I used was the latest guitar synthesizer, the Boss GM-800. However, as I mentioned earlier, there are several "difficult conditions." Guitar synthesizers contain sound sources, and it's possible to recall and play them, but the process of re-editing and reconstructing those sounds in one's own way is still somewhat difficult, challenging, and cumbersome. Therefore, I didn't play the GM-800's internal sound sources directly on the GM-800 itself.
[25:05]
Instead, I used a method where the performance signal the GM-800 generates is transferred to a software synthesizer, and that software synthesizer serves as the sound source.
[24:25]
Earlier, I likened modular synthesizers to vintage machines, and the software synthesizer I played this time is also somewhat close to vintage. Rather than a physical instrument, it's a software clone: the Korg Triton synthesizer. I've actually used a physical Triton in a recording before; it's featured on my album "Siren."
[26:10]
The GM-800 allows you to change the parameters for each individual guitar string. This means you can set a different timbre for each string. Looking at demo performances of guitar synthesizers out there, most use the same sound for every string: for example, a piano sound is set, and the synthesizer is used in a scenario where a guitarist is playing the piano or a saxophone. However, I utilized the ability to set various parameters per string to create disparate, non-uniform sounds for each string. As a result, even though the phrases are fairly well-structured and played like a normal guitar, the different timbres of each string create a patchwork, pieced-together sound, resulting in a very strange nuance.
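The per-string idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the GM-800's actual API: in a multi-channel setup each string is reported separately, and here each string number is simply mapped to a different, invented patch name.

```python
# Hypothetical sketch of per-string timbres. Patch names and the mapping
# are invented for illustration; a real rig would route each string's
# data to a separately configured sound.

STRING_PATCH = {
    1: "bell",       # string 1 (high E) -> one timbre
    2: "organ",      # string 2 -> another
    3: "choir",
    4: "brass",
    5: "sub_bass",
    6: "noise_hit",  # string 6 (low E) -> yet another
}

def render_chord(notes):
    """notes: list of (string_number, midi_note) pairs.
    Returns which patch sounds for each note: a 'patchwork' of timbres
    even when the notes form one ordinary chord."""
    return [(STRING_PATCH[s], n) for s, n in notes]

# An ordinary six-string chord shape: one chord, six different timbres.
chord = render_chord([(6, 40), (5, 47), (4, 52), (3, 55), (2, 59), (1, 64)])
```

The point of the sketch is only this: the fingering stays normal guitar playing, but because each string resolves to a different sound, the result is the pieced-together texture described above.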
[27:41]
I tried playing like that, but this was just one experimental case for me. In the future, I'd like to refine this method a bit more, add some ideas or something, and create my own way of using the guitar synthesizer. That's the feeling I had as I finished the live performance.
[28:16]
(An AI video is shown with the caption "Let's take a break now," showing Susumu Hirasawa playing with three cats and a chimpanzee while drinking tropical juice on a tropical beach.)
[28:22]
Now, I'd like to take a five-minute break. I'll see you again in five minutes.
[28:31]
(The title screen displays the text "Susumu Hirasawa's Asyu-On TV: Accord of the Uncoded + Three Echoes of the Dao")
[33:30]
Now, I'd like to talk about "Three Echoes of the Dao: Tokyo Edition." "Three Echoes of the Dao" was originally created for the China tour, and since I've already held a Back Space Pass streaming show once since the China tour ended, I thought it would be good to talk about it from a different perspective here.
[33:56]
I'd like to talk a little about the song selection here as well. It was my first time performing in China, and in order to give first-time attendees a comprehensive understanding of what kind of person Hirasawa is and what he does, I selected songs that were like an "introduction to Hirasawa."
[34:20]
I don't select songs that way in Japan. However, being in a place I'd never been before, not knowing whether I was truly welcome or how well-known I was, and aiming to create an "overwhelming sense of being different," I chose a setlist that was like an "introduction to Hirasawa" to set an atmosphere overwhelmingly different from the other performers. Furthermore, because the setlist was constructed for festivals, it ended up being a collection of songs that a festival audience could easily get excited about. That was certainly intentional, but for me, it was an extremely tough performance.
[35:16]
Many of the songs required me to sing at full volume for extended periods, and with "Gardener King" and "Rotation" crammed together towards the end, it felt like a five-minute full-speed sprint for me. It was such a tough experience that I was relieved to have managed to keep singing through it.
[35:41]
It seems the Japanese audience envied that song selection, which, to put it another way, was like an "introduction to Hirasawa," in other words, a "Hirasawa hit song compilation," and, frustrating as it is, that's how it ended up being. However, at least in Japan, where I perform live many times and am somewhat recognized on video sites and so on, my regular live performances tend to feature song selections that are fragments of my long career rather than an introduction.
[36:21]
However, since it was my first performance in China, I thought you all would be interested in "what exactly did he do (in that country)?", so I decided to recreate it in Tokyo.
[36:35]
And there was one challenge in recreating it: the opening announcement is in Chinese, and everyone needed to know what was being said. So we displayed a Japanese translation of the spoken Chinese on the screen. The Chinese announcements used there were based on a Chinese script I made using AI, and the voice was also generated by AI.
[37:17]
I predicted that it would probably sound like unnatural Chinese, and I did it that way deliberately. I expected that, so I skipped the step of having someone who understands Chinese check it. I wanted to incorporate that strange, probably intentionally strange, feeling into the live show. This is because I wanted to create the atmosphere of something made by an ordinary citizen, an amateur, using internet-based means of communication, without experts or high-barrier infrastructure.
[38:01]
And in Tokyo, the Japanese translation was displayed on the right, and that Japanese itself was somewhat childish, which I think helped achieve the feel of internet communication (a script made by an amateur).
[38:20]
So, that's roughly what I intended to talk about, but since I've already discussed "Three Echoes of the Dao," I don't really have any other topics I'm enthusiastic about, so from here on I'll answer some questions. I've picked up some questions from X regarding these two live performances, so I'll answer those.
[39:01]
Let's start with this one. It's a question about "Accord of the Uncoded."
.
(Question 1)
Q: "I noticed while watching the live stream today that the sticker on the computer next to your solo performance in Parthenon said 'Three Echoes of the Dao.' Does this have any deeper meaning?"
.
A: Regarding that question, there's no particularly deep meaning. It's the result of my laziness. First of all, the computer I was using at the time was the main computer I used during my China tour. When using it with the lid open, the lid faces the audience, so I thought I'd add something related to the live performance as a design element there. I consulted with designer Mr. Nakai (Toshifumi Nakai), and he made a large sticker of "Three Echoes of the Dao," which I stuck on. The tour ended, but I was too lazy to peel it off, so I just left it on. That's probably why it was shown in close-up on camera during my modular synth solo part. In other words, it had no meaning whatsoever; it was simply my laziness that gave rise to the question. Um, and next.
[40:52]
(Question 2)
Q: "Is the lighting team for this live performance and 'Accord of the Uncoded' different from the lighting team for 'Hybrid Phonon 2566+'?"
.
A: That's the question. The lighting teams for 'Hybrid Phonon 2566+' and this 'Accord of the Uncoded' are the same. However, the lighting team for 'Hybrid Phonon 2566,' that is, the main run before the additional show (Hybrid Phonon 2566+), was different. Recently, over a certain period, there were personnel changes in the sound and lighting staff of my live team. So the team that handled the lighting for "Hybrid Phonon 2566+," the China performances, "Accord of the Uncoded," and "Three Echoes of the Dao: Tokyo Edition" is one and the same new lighting team, and I would like to continue creating live shows with them in the future.
[42:14]
(Question 3)
Q: "Did you use mind mapping in the production of (the album) 'unZIP' and (the concert) 'Accord of the Uncoded'?"
.
A: Regarding that question, I did not use mind mapping. The reason is that I've been using AI more and more recently to think about various things; there are cases where that's simply quicker. For example, when there's something vague in my mind that I can't put into words, or a hazy image that's difficult to visualize clearly, mind mapping is appropriate. However, when you have a clear concept or specific details that are unorganized, or that you want to expand on, AI is suitable. You can use AI to clarify ambiguous things. It seems that when the thoughts you're trying to organize are already close to verbalization, AI is more convenient than mind mapping. That's what I've recently come to understand through experience.
.
Mind mapping is suitable for self-directed sessions in your own mind when you have vague images or clear visuals that haven't been verbalized, and you want to organize or create something. For things like "Accord of the Uncoded" or "unZIP", where the concepts were clear, I found that using AI was more appropriate than mind mapping, so I used AI this time. And next...
[44:37]
(Question 4)
Q: "In 'Accord of the Uncoded,' the record cover of 'In A Model Room' was prominently displayed at the beginning. What was the reason for deliberately avoiding performing it? Also, are there any plans to perform it in the future?"
.
A: This is a little difficult to understand, but are you asking, "Why didn't you perform any songs from the first album?" Or are you saying, "It's unacceptable that you showed it so clearly visually, yet omitted the first album?" As I explained at the beginning, in order to show what kind of message unZIP carries, we used that visual to show the people imprisoned in the first stage, "In A Model Room," and the process of their liberation. So it wasn't a preview of performing the first album, but rather a device for moving away from it. It's for exactly the same reason I gave at the beginning when explaining how the songs for "Accord of the Uncoded" were selected.
.
And I believe your question was, "Do you have any plans to perform older songs like those from your first album in the future?" I can't say definitively whether or not I have such plans. It's possible I might bring them out if necessary, but please understand that I won't use them for the purpose of "let's play 'Art Mania (the person I met at the art museum)' to liven up the venue." Next question.
[46:36]
(Question 5)
Q: "In 'Accord of the Uncoded,' Hirasawa's skillful manipulation of the Ejin was impressive. What kind of training did you undergo to acquire that skill?"
.
A: Well, training wasn't necessary. A monetary employment relationship was necessary. That's a joke, though. Just as a cat reacts to opening a can of cat food, the Ejin have a habit of bending their knees in response to the sound of certain joints on a human body. I simply utilized that. Is that satisfactory? Now, let's move on to the next question about "Three Echoes of the Dao."
[47:29]
(Question 6)
Q: "In 'Three Echoes of the Dao,' who was in the video of foot stomping and hand clapping that was projected onto the screen during the intro to 'Dreaming Machine'?"
.
A: That's the question. First of all, the reason that video was included in the Tokyo performance is that when we performed in China, TAZZ and SSHO's foot stomping and hand clapping was only visible to people in the front rows. So for Tokyo, we filmed it and embedded it as a video to make it easier for the audience to recognize what they were doing and to connect it to their reactions. And the person stomping their feet there is TAZZ. You might think someone was hired as an extra for the filming, but actually, TAZZ was called into the studio, and the act of lifting his leg high and stomping it down was filmed and projected. Is that clear? And next.
[49:02]
(Question 7)
Q: "How do you decide on live performance titles such as 'The Accord of the Uncoded' and 'Three Echoes of the Dao'?"
.
A: That's the question. It's difficult to explain, but a title is an embodiment of a concept, so it's a process of combining words that represent it. So, how do I find those words? I use a method called abstraction and 360-degree perspective. This is a name I came up with myself, but for example, let me show you the process of how people being liberated from the Room in "In A Model Room" is transformed from a specific event like "Liberation from the Room" to the use of words like "unZIP" and "Accord of the Uncoded."
.
First, if we abstract the situation of a person trapped in a room up to the next level, we get abstract expressions like "a person imprisoned in a restrictive space" or "a person whose actions are limited." Once we've abstracted it to that level, we ascend to it and look around 360 degrees. Looking around 360 degrees, we try to find, at a lower level of abstraction, something that corresponds to the same concept, that is, a more concrete situation. In other words, we rephrase the physically restricted situation into more concrete, episodic words such as "compressed" or "encoded."
.
When we descend to that level of concreteness, we can create titles that express the same concept, even though they are narrative-driven rephrasings like "unZIP" and "Accord of the Uncoded." I hope you understand. It's very complex and difficult to explain in words, but this abstraction and 360-degree perspective are techniques I actually use frequently when I post on X. Pay attention to what I'm saying. You might be able to find instances where you think, "This is probably a statement made after going through that process." And now for the last question. Let's make this the final one.
[52:10]
(Question 8)
Q: "You mentioned that in the BSP, Back Space Pass, from 'Three Echoes of the Dao,' you referenced AI for the laser harp gestures. Is that also the case for this live performance?"
.
A: That's your question, and the answer is yes and no. What I asked the AI for wasn't the movements for the laser harp, but simply how to move my body. There are cases where this is interspersed with the laser harp movements and the intermediate steps of the movements, so the boundary between what is for the harp and what is suggested by the AI is somewhat ambiguous.
.
As I've mentioned several times before, the laser harp is used to expand physicality in the handling of electronic devices that don't inherently require large body movements. Therefore, the laser harp setup is arranged to include how the body movements for playing the harp look, and sometimes it includes miming. It's created by deliberately inserting unnecessary movements as intermediate steps to create a continuous flow of motion.
.
The movements related to the harp and the movements suggested by the AI were thus being conflated. What I showed on stage in "Three Echoes of the Dao" was a combination of the AI-suggested movements and the movements the laser harp inevitably requires.
[54:15]
And so, it's almost time to wrap things up. This was a pre-recorded presentation. The implant surgery is taking a while, but there are various reasons for that. It involves drilling holes in the bone and embedding metal.
.
So, there's a waiting period until it's firmly fixed, and various other things like that. Because of that, it's still going to take some time. There are a total of four implant surgeries, and two have already been completed. There are two more this month, and then the major surgeries will be finished. So far, there haven't been any issues with my face or speech, but just to be safe, considering what might happen, or if something might be visible when I open my mouth, I decided to record the broadcast.
.
And unless there are any unexpected major events, the next episode of this Asyu-On TV will be broadcast after Fuji Rock Festival. Until then, see you next time on Asyu-On TV. Goodbye.
[55:41]