Developer
When the phrase "It's code made by AI" starts to be heard in an organization,
In a cloud project in 2012, a developer told me, "Java 1.4 is still the best." It was a version released in 2002 whose official support had ended in 2008, and the remark strangely stuck in my memory. At the time, I took it as a simple matter of technical preference. But as the years passed, I realized that in the developer world, similar statements are repeated every few years, even as the technologies change. People often fix their standard of technology at the point where they were most comfortable. Technology moves on, but people's attitudes repeat in remarkably similar ways.

I heard a similar story at a recent developer meetup. "Development is meaningless now. Only those who are good at using tools will survive." Hearing that, I recalled that scene from the past. Being good at certain tools does seem important right now. But tools keep changing: versions go up, interfaces get easier, new methods keep emerging. So a thought suddenly struck me. Isn't "being good at the tools" also a role that will eventually be replaced in the next wave of change? Perhaps in a few years, someone will be saying, "Opus 4.6 was the best. I was so good at writing…"

However, there is one thing we must not miss here. Deciding what to build, why to build it, and where to go ultimately comes from human experience. Development is not simply producing code; it is closer to deciding which choices to make. The problem arises when that role becomes increasingly blurred. When merely using tools starts to count as the work itself, human judgment and responsibility get pushed aside at some point. And then statements like these naturally begin to surface within the organization.
"Why did this turn out this way?" "I don't know." "Because the AI made it."

As technology advances, the role of humans should not disappear but become clearer. Tools will keep changing, and versions will keep updating. But if the role of choosing and taking responsibility based on experience disappears along with them, perhaps only one thing will remain: the same words, repeated every time a problem arises. "Because the AI made it."

2026.03.04
AI
The moment you can drive without knowing the engine, and AI development
Let's recall the time when cars first appeared. Back then, cars were not simply a means of transportation as they are today. They were hard to handle and broke down frequently. Starting the engine, controlling the fuel, and fixing problems all required understanding the mechanics. It was an era when the roles of driver and mechanic were much closer than they are now. Of course, not every driver had a perfect understanding of the engine. But cars were still closer to a piece of technology than to a tool the public could easily use.

In 1908, the situation changed significantly when Henry Ford introduced the Model T and implemented the moving assembly line. Cars were standardized and production was systematized. Users could drive without knowing the intricacies of the engine, and a structure emerged in which specialized mechanics handled repairs. A significant change occurred in this process: the roles of drivers and engineers began to separate. The technology became more complex, but the user experience became simpler. What mattered now was not "how the engine works" but "where to go."

Watching software development through AI these days reminds me of this scene. Once, creating software required a deep understanding of languages and structures. Now AI creates the basic structure, connects functionalities, and quickly implements the form. The barrier to the technology is clearly lowering. But there is one important point here. Even if AI generates code, the responsibility for judging that code's accuracy, security, scalability, and appropriateness still rests with humans. Cars becoming widespread didn't mean drivers could stop caring about brake safety. Likewise, even if AI writes the code, responsibility for the results does not disappear. Rather, the developer's role seems to be shifting one step forward.
From someone who implements everything directly, to someone who defines problems, designs structures, and verifies and takes responsibility for the results AI generates. This role is not simply "someone who checks code"; it is closer to one that makes technical judgments, manages risks, and guarantees the direction and quality of the system. As technology advances, the burden of implementation may decrease, but the weight of judgment and responsibility may increase. Perhaps we are standing at another turning point: an era in which not everyone understands the engine, but it is still up to humans to decide where to go, and to answer for that choice.

2026.02.27
Elon Musk
Elon Musk's 'Death of Coding': How Should We View It?
Not the end of implementation, but a change in the way of observation

The claim that coding will soon be unnecessary
Recently, Elon Musk said that AI will generate machine code directly, without programming languages, and that humans will no longer need to write code. The claim is that the friction of implementation will disappear: an era where thoughts become execution. "Imagination-to-Software." This direction itself is worth discussing.

I see this not as an end, but as a stage of abstraction
Technology has always been abstracted. High-level languages replaced assembly, and frameworks were built on top of them. ORM wrapped database access in objects. Similar stories emerged back then: "Now there's no need to write SQL directly." But in reality, it was different.

SQL did not disappear
SQL was created not to express "how to fetch," but "what you want." And as object-oriented development spread, developers started exchanging data as objects. But when problems arose, they eventually went back to the database. ORM generated the queries, but we had to read those queries again. We checked the SQL to understand performance, and queried directly to validate data. The abstraction appeared, but observation did not disappear.

The same will happen even if AI generates code
If AI generates machine code directly, we enter a world with one more layer of abstraction. But the following question remains: "Why does this system work this way?" Humans have always devised ways to look inside before trusting results. We created logs, debuggers, profilers. Systems generated by AI will be no exception. We will likely end up building a new layer of observation to understand AI's output.

The easier implementation becomes, the greater the responsibility for understanding
AI accelerates execution. But it also rapidly propagates ill-defined problems. What matters at this point is not who wrote the code, but who made the decisions.
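The ORM era gives a concrete picture of what that "layer of observation" looks like. A minimal sketch, using Python's built-in sqlite3 with an illustrative schema: the query text stands in for SQL an ORM would generate on our behalf, and we still look underneath it to see how the engine will actually run it:

```python
import sqlite3

# A miniature of the ORM era: schema and names are illustrative, and
# the query string stands in for SQL an ORM would generate for us.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

generated_sql = "SELECT id FROM users WHERE email = 'a@example.com'"

# The observation layer: don't just trust the abstraction's output —
# ask the engine how it will actually execute the query.
plan = conn.execute("EXPLAIN QUERY PLAN " + generated_sql).fetchall()
for row in plan:
    print(row[-1])  # the human-readable detail column of the plan
```

On a typical SQLite build the plan reports a search via idx_users_email rather than a full table scan. The specific output matters less than the habit it illustrates: every abstraction layer so far has grown its own inspection tool, and AI-generated systems will likely be no different.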
This is why I see this change as akin to the return of the webmaster: execution is automated, but the role of understanding the whole becomes even more important.

What disappears in the end is not code, but centrality
I don't think coding will disappear entirely. But I believe the center of gravity will shift: from typing ability to structural thinking, from implementation speed to problem definition, from partial skills to understanding the entire system. Technology will become more abstract. But humans have always devised ways to look beneath it. I believe this time will be no different.

2026.02.13
AI Development
After AI started generating code, responsibility became more ambiguous.
Last week, an unusual incident occurred at a cryptocurrency exchange. In a situation where 2,000 KRW was supposed to be paid out as part of an event, 2,000 Bitcoin was mistakenly paid out instead. There are several possible causes: a problem in the implementation process, a mistake in the operational phase, or a simple omission in verification. The important point is that this article does not aim to confirm or definitively state the cause.

However, one question naturally arose while observing this incident. If this payout logic had been code automatically generated by AI rather than written by a human, how would we explain responsibility for the incident? This is not meant to assume facts; looking at the current development environment, it is a question quite close to reality.

Although the gap between 2,000 KRW and 2,000 Bitcoin seems extreme, technically such incidents occur under surprisingly simple conditions. Unit verification might have been omitted, there might have been no upper-limit check, or event-specific logic might have been mixed into operational code. These types of problems have occurred repeatedly even without AI. They happen easily in human-written code and in operational processes. In other words, this is not an "accident caused by AI," but the type of accident that can happen to anyone as systems grow more complex.

Let's think one step further. What if an AI agent had drafted this logic, and a person had quickly reviewed it, decided "this is good enough," and deployed it? The landscape after the incident would look slightly different from what it does now. Who is the entity that created this logic? Who is responsible for not verifying the units or scope? Does the person who pressed the approval button bear all the responsibility? These questions are not easily resolved.
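The omitted checks mentioned above are mechanical enough to sketch. A minimal, hypothetical guard in Python — the function name, currency caps, and amounts are illustrative assumptions, not details from the actual exchange:

```python
from decimal import Decimal

# Hypothetical payout guard: the kind of check whose absence lets a
# "2,000 KRW" event reward go out as "2,000 BTC". The caps and names
# here are illustrative assumptions, not any real exchange's rules.
EVENT_PAYOUT_CAP = {"KRW": Decimal("100000"), "BTC": Decimal("0.01")}

def validate_event_payout(amount: Decimal, unit: str, expected_unit: str) -> None:
    # Unit verification: the recorded unit must match the event's intent.
    if unit != expected_unit:
        raise ValueError(f"unit mismatch: got {unit}, expected {expected_unit}")
    # Upper-limit check: no event payout may exceed its per-currency cap.
    cap = EVENT_PAYOUT_CAP[unit]
    if amount > cap:
        raise ValueError(f"{amount} {unit} exceeds event cap of {cap} {unit}")

# The incident scenario: an amount intended in KRW, recorded in BTC.
try:
    validate_event_payout(Decimal("2000"), "BTC", expected_unit="KRW")
    blocked = False
except ValueError:
    blocked = True  # the mismatched payout is stopped before execution
```

Writing such a check takes minutes. The harder part is what follows: deciding that the check is needed, and knowing who answers for its absence.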
Because while the outcome is clear, the decision-making process leading to that outcome becomes blurred. This is the discomfort often felt in AI-based automation environments. The execution clearly happened, the code remains, and the logs exist. But why this unit was chosen, up to what point it was deemed safe, and under what assumptions this logic was allowed are difficult to explain. So when an incident occurs, organizations often say: "The system operated that way." This is less an evasion of responsibility than a statement arising from a structure that never preserved the flow of decisions in the first place.

To reiterate, there is no need to blame AI for this exchange incident. But one thing is clear. If such incidents occur even in environments where AI is not involved, then in environments where AI makes more and more decisions on our behalf, the possibility of such incidents occurring faster and at larger scale also increases. Automation can reduce errors, but it can also amplify their impact when they occur. Especially in areas involving money, authority, and operational logic, a very small judgment can immediately become a risk to the entire organization.

Questions like these are likely to arise more often in the future. Was this decision made by a human, or by a system? How are responsibilities delineated? Teams that cannot answer them may grow more anxious as they introduce automation. Conversely, there will be teams that grow stronger as automation increases: teams that can explain the judgments made even when the results were produced by AI, teams where the boundary between human and system judgment is structurally organized, teams that can distinguish responsibilities and areas for improvement when an incident occurs. The difference between 2,000 KRW and 2,000 Bitcoin is not simply a numerical error. This incident poses these questions to us.
In an automated execution environment, what are we leaving behind? In an era where AI can create code, the role of humans is gradually changing: not someone who implements directly, but **someone who can document how decisions were made**. Whatever its cause, this incident is close to a preview of the kinds of failures we must prepare for in the AI era.

2026.02.09
AI
Development created by AI is close to a model home.
It has become natural these days to start development with AI. You explain a feature in words, a screen appears, APIs get attached, and the shape of a service is created. The speed is truly overwhelming. But there's something I often think about while watching this: many of the results we are creating with AI right now are like model homes.

You can see a model home, but you can't live in it
When you visit a model home, everything looks perfect. The lighting seems good, the layout is well designed, the furniture is beautifully arranged. You can walk through it and check the view from the window; even the scent is curated. But there are things you can never learn from a model home. How is the morning commute traffic? How much noise from the adjacent road comes in at night? Do sewer smells rise through the structure on rainy days? Does it get a breeze in summer? How do people in this neighborhood actually live? And one more thing, the most important: is this house actually safe? Are the pipes connected properly? Can the electrical design handle actual usage? Is there preparation for fire or flooding? All of this is revealed only by living in it and operating it.

AI makes model homes really well
AI currently creates the screen, adds functionality, and quickly builds the structure. The appearance is plausible. The space is well divided. To use the architecture analogy: given the blueprints, AI can erect walls, install windows, and create the internal structure. It is truly excellent, up to the level of the model home. But AI does not make the decisions: where that building will stand, what the surrounding environment is like, how people will actually live in it, and above all, how to prevent risks.

Development is exactly the same
AI creates code. It creates CRUD, adds login, and configures screens. But AI doesn't know.
Why this feature was created. Why the structure changed because of a certain obstacle. Where customers kept getting stuck. What repeatedly failed during operation. Where security incidents could occur. These are all histories experienced by humans, and products change shape and are refined based on them.

Therefore, AI-based development often feels like this
At first glance, it's perfect. But its limitations appear quickly once it enters actual operation: no exception handling, difficult to extend, the whole thing needing re-examination for even small changes, poorly designed permissions, sensitive data exposed as-is, deployment without any security consideration. It's like a house where the building stands, but no plumbing or electricity is connected and the doors don't lock. The model home exists, but the actual residence hasn't been built yet.

This part is important
AI can put up buildings. It can also create model homes. But placing that building within actual life, and continuously adjusting it to the situation and environment, is human judgment and experience. AI creates the structure; humans create context and responsibility. AI creates a space that can be shown; people turn that space into a place where they can live safely.

The Role of Developers Will Not Disappear
I believe the developer's role will not diminish in the future. Rather, it will change. Developers will now face the question not of "who can write code faster," but of "who better understands reality and risks." Those who have experienced the field, gone through operations, and directly confronted user inconveniences and potential accidents will be the ones who complete the product.

Conclusion
AI builds buildings quickly, and it creates model homes that look very impressive. But it is people who make those buildings truly livable. Code can be generated, but environments and risks cannot.
Ultimately, the product is completed by the decisions of humans who have experienced the field.

2026.02.04
Developer
Isn't "comprehension" more important for developers now?
I often have a thought when I see developers these days. Problem-solving happens very quickly with AI. But they aren't interested in what came before. Why it was made this way, what the business context was — they don't ask, and don't try to hear, why the existing code is in this form. And it's not only code written by others: I was quite surprised to see them applying code they created with AI without properly reading it.

In one project I experienced, there were cases where very inefficient, illogical code was included because of business requests. That was not a result of the developer's skill, but of the situation and requirements at the time. It was code like a "field artifact," with accumulated history. But these days, the attempt to understand such context is disappearing. "Existing code is strange → rewrite it with AI." This formula repeats too easily. The problem is that the moment the newly written code breaks the existing business logic or operational flow comes more often than expected. AI helps with implementation, but it doesn't tell you why things became the way they are.

What is becoming more important for developers now is not coding speed, but things like these: the ability to read existing systems, the attitude of trying to understand strange code before fixing it, and the ability to connect history with business context. As for how to use AI — honestly, you can just ask the AI. But that doesn't substitute for the ability to understand what already exists. For future developers, won't "understanding well" matter more than "how quickly you can code"?

2026.01.31
Thawing
The Real Reason a Developer Missed a Deadline: Frozen Pipes
One day, one of the developers on our team suddenly started acting strangely. He was usually quiet but always got things done precisely, but then: schedules kept getting pushed back, commits were sparse, he spoke even less, and his expression in meetings was vacant. At first, I had the usual suspicions. "Is he stuck technically?" "Is he having trouble concentrating lately?" "Could it be burnout…?" So I made the typical manager move: "If you're stuck on anything, let me know." But the answer I got was completely unexpected. "Sir… my home's water pipes froze." …What? "I couldn't shower… couldn't even use the restroom properly… I was completely out of my mind." At that moment, I realized: this person wasn't stuck on code, he was stuck in life.

What we often mistake
When a team member is struggling, we usually interpret it like this: Is it a lack of skill? Is it an attitude problem? Has their sense of responsibility slipped? But sometimes reality is truly simple. The reason a developer can't meet a deadline might not be the difficulty of the algorithm, but the difficulty of the restroom.

Conclusion: The problem wasn't the code, but the person
From that day on, I became convinced. When schedules slip, instead of asking "Why couldn't they do this?" we should first ask, **"Are you okay lately?"** Development is done with the head, but the head ultimately runs on top of life. If the water pipes burst, the sprint bursts too.

2026.01.23
AI Silo
The real debt that arises when AI develops too quickly

2026.01.19

Plan
Focusing only on the plan resulted in an unwanted product that was just tailored to the schedule.
So What Should We Do Now: How to Switch to "Execution Management"

When I worked as a development manager at a startup, the thing I repeated most was "making plans." Business schedules came first, development plans were made to fit those schedules, and from then on it was a structure of struggling every day to reach the goals. But at some point, something strange starts to happen. The team stops building the product and starts becoming optimized for producing excuses to keep the plan: "documents explaining why the schedule was delayed," "post-mortem reports saying it couldn't be helped because of too many risks," "re-planning the plan to make up for it next week." In the end, the result is similar. The schedule was met, but a product no one wants comes out.

The Problem Is Not the 'Plan', But the 'Absence of Intent'
Plans are necessary. But plans cannot replace execution. Problems that occur during execution cannot all be predicted at the planning stage. The bigger problem, though, lies elsewhere. When you start running solely on the schedule, the team's interest shifts from "what should we build" to **"by when must we finish."** From that moment, the development team and the business team stop running toward the same goal and start colliding with different orientations. Business wants "what the customer needs"; development wants "to meet the schedule." Neither side is wrong; the standards they look at have simply changed. And in teams where only the schedule remains, misunderstandings eventually pile up: "The dev team has no business understanding." "The business team just doesn't know development." Both are only partially true, and the core is this: we only agreed on the schedule, we never agreed on the intent.

The Development Team Cannot Make a Product That Satisfies the Market "Alone"
One thing must be made clear here. It is difficult for the development team to read the market, analyze customer needs, and derive the "correct product" on its own.
And the moment you shift that responsibility onto the dev team, they choose one of two paths: become a team that doesn't listen (overturning product planning), or become a team that just does what it's told without thinking (only meeting the schedule). Both fail. What matters in a startup is not "make the dev team build a product that satisfies the market," but for the dev team to understand the business intent and become the team that turns that intent into the best implementation. Business takes responsibility for the market, development takes responsibility for implementation, but **a structure where the intent is aligned as one** is needed.

So What Should We Do: 5 Steps of Execution Management (Intent-Centric Version)

1) Treat Plans as 'Hypotheses', Not 'Contracts'
Many teams run plans like final versions. But a plan is a hypothesis. A plan is an estimate that "if we do this, it will work"; execution is the process of proving "actually doing it showed we were right or wrong." The moment you operate a plan like a contract, the team starts making distorted decisions to meet the plan.
✅ Practical Application: Separate **"fixed schedule" and "variable scope"** on the timeline. Keep the milestones, but leave the implementation scope adjustable.

2) Set Goals Based on 'Intent', Not 'Schedule'
Goals in schedule-centric organizations usually look like this: login feature complete, dashboard first draft complete, payment API integration complete. That is a task list, not a purpose. Intent-centric goals look like this instead: "Make sure signup conversion doesn't break." "Make the user move immediately to the next action." "Do not block the flow where payment occurs." In other words, the goal becomes not the feature, but implementing the flow the business wants.
✅ Practical Application: Change the sprint goal sentence like this —
❌ "Develop Feature A"
✅ "Implement Business Intent A, and verify the success criteria"
Here, the success criteria need not be the market itself, but **the business-intent criteria (conversion, flow, data, operational feasibility).**

3) Don't Find Risks in Meetings, Reveal Them Quickly in Execution
Execution management is ultimately a battle against risk. The moment you look for risks in a meeting, risk becomes an argument and the schedule becomes a defense. Risks are only seen when you get your hands dirty.
✅ Practical Application: The 48-Hour Rule. If a task is "vaguely difficult," do just enough work to verify the risk within 48 hours first. API integration → try connecting just the minimum call/auth. Performance worries → check bottlenecks with dummy data. Screen ambiguity → check reactions with a prototype. Done this way, the work becomes grounds for decision-making, not an "excuse."

4) Manage "Blocked Points and Decisions", Not "% Progress"
The phrase plan-centric teams use most: "We are currently 70% done." This phrase is dangerous because hell might live in the remaining 30%. Execution-centric management asks instead: Where is the blocked point right now? What decision is needed? Who needs to decide, and when?
✅ Practical Application: Daily 10-Minute Checklist. What is the one thing blocked today? What decision is needed to break through it? Who is the person to decide today? Sharing just these three drastically reduces misunderstandings between the business and dev teams.

5) Keep the Schedule, But Don't Keep It by "Damaging the Intent"
If you rush the schedule, abandon the intent, and just fill in the features, the result is an unwanted product fitted to the schedule. What's more fatal in a startup is not a product that comes out late, but a product that came out fast yet differs from the business intent.
✅ Practical Application: Priorities when the schedule collapses —
1. Scope adjustment (maintain intent)
2. Implementation simplification (maintain intent)
3. Launch schedule change
In other words, the schedule is adjustable, but the intent must not be broken.

Conclusion: Not the "Schedule", But the "Intent" Binds the Team Together
If we focus on the plan, we can become a "team that keeps the schedule." But if we focus on execution, we become a "team that implements business intent." And for business and dev teams to move as one team in a startup, what is ultimately needed is this: not the schedule, but execution management aligned with intent. If you run only on the schedule, business and development create constant misunderstandings while holding different goals. Conversely, if you share the intent, even when the schedule shakes a little, the confidence that you are looking at the same goal remains. That is what keeps a team from collapsing.

2026.01.12
TPM
How can a team function without a TPM?
A New Solution Through BCTO and Aline.team

When the topic of a TPM (Technical Program Manager) arises in early-stage startups, the same question always follows: "Do we need a TPM?" And then a realistic answer emerges: "We need one, but it's difficult right now." This article discusses why the TPM role exists and what changes when its responsibilities are broken down and implemented through systems instead of people. We will also explain where BCTO and Aline.team fit into this process.

Why Was a TPM Needed?
A TPM is not just a schedule manager. While a PM focuses on "What should we build?", a TPM is responsible for questions like: How is this work progressing? Who is becoming a bottleneck? How do technical decisions impact schedule and risk? How are the dependencies between multiple teams connected? As organizations grow and systems become more complex, these questions naturally converge on one person. That person is the TPM. This is also why TPMs are crucial in large corporations: a TPM is the person who connects technical and organizational complexity.

But Why Is It Difficult for Early-Stage Startups?
The challenge lies in the reality of early-stage startups. Experienced TPMs come at a high cost. The smaller the team, the more disproportionate the management burden becomes. Ultimately, many decisions rely on people's experience and memory. So in most early-stage teams, the CTO or tech lead takes on the TPM role in addition to development. The problem is that this structure doesn't last long. Developers increasingly become coordinators, and while coordination increases, the basis for decisions becomes unclear.
Breaking Down the Role from People to Systems
Here lies a crucial shift in perspective. Instead of asking, "Should we hire a TPM?", ask, "What structure can handle the tasks a TPM performs?" If we divide the TPM's responsibilities, there are broadly two categories: observing and explaining, and facilitating coordination and decision-making. Do these two areas necessarily require a person to handle them entirely?

BCTO: Systematizing the TPM's 'Observation and Visibility'
BCTO automates the observation and reporting that TPMs do daily, using data. Based on actual Git commits, PRs, and issues, it shows "what is currently in progress," and through developers' work patterns and flows, it explains "why things slowed down." It turns schedule delays or excessive optimism into evidence for proactive measures rather than post-hoc analysis. The important point here is that this is about explanation, not evaluation. It doesn't show who did well or poorly, but rather what structures and flows led to certain outcomes. Where a TPM would provide explanations through meetings and documents, BCTO offers continuously updated data.

Aline.team: Supplementing the TPM's 'Coordination and Decision Support'
Aline.team goes a step further. The questions TPMs often found most challenging are typically: Should a senior or junior developer handle this task? Is the current bottleneck technical or communication-related? Why does team speed vary with the same number of people? Aline.team learns individual developers' patterns and strengths, analyzes task types and team situations together, and aims to answer questions like: "Who is the most rational choice to take on this task?" "Why was this schedule predicted this way?" "Where should we intervene to speed up the overall flow?" This is less about making decisions for you and more about supporting decisions so they become explainable.

So, Do BCTO and Aline.team Replace TPMs?
To be precise, no.
These two products are not tools designed to eliminate the TPM role. Instead, they enable teams to function without a dedicated TPM, and existing TPMs to focus on more critical decision-making. The result is a structure where systems handle 70-80% of the tasks a human TPM would perform, so people can concentrate on the remaining 20-30% of true judgment and responsibility.

In One Sentence
TPM is a role, and BCTO and Aline.team turn that role into a scalable system. What early-stage teams need is not perfect consensus or more coordination, but a clear understanding of who makes decisions, the ability to explain why those decisions were made, and a structure where accountability for outcomes is not blurred. When that structure holds, experience becomes a strength, and the team accelerates.

2026.01.08