Cracking Android SDE2/SDE3 Interviews in 2026: Deep Dives, Code, Follow-ups | Part - 3
Architecture & Patterns
31. MVI Reducer?
MVI Reducer (with Error Handling)
The reducer in MVI is a pure function that takes the current State and an Intent and returns a new State.
Key properties
- No side effects (no network, DB, coroutines)
- Deterministic and replayable
- 100% unit-testable
- State is immutable
State
@Immutable
data class OrderState(
val items: List<OrderItem> = emptyList(),
val loading: Boolean = false,
val error: String? = null
)
Intents (User + Result)
sealed interface OrderIntent {
data object Load : OrderIntent
data object Retry : OrderIntent
data class AddItem(val item: OrderItem) : OrderIntent
data class LoadSuccess(val items: List<OrderItem>) : OrderIntent
data class LoadFailure(val message: String) : OrderIntent
}
Reducer (Pure)
class OrderReducer {
fun reduce(state: OrderState, intent: OrderIntent): OrderState =
when (intent) {
OrderIntent.Load,
OrderIntent.Retry -> state.copy(
loading = true,
error = null
)
is OrderIntent.AddItem -> state.copy(
items = state.items + intent.item
)
is OrderIntent.LoadSuccess -> state.copy(
loading = false,
items = intent.items
)
is OrderIntent.LoadFailure -> state.copy(
loading = false,
error = intent.message
)
}
}
ViewModel (Side Effects)
class OrderViewModel(
private val reducer: OrderReducer,
private val repo: OrderRepo
) : ViewModel() {
private val _state = MutableStateFlow(OrderState())
val state = _state.asStateFlow()
fun dispatch(intent: OrderIntent) {
_state.value = reducer.reduce(_state.value, intent)
if (intent is OrderIntent.Load || intent is OrderIntent.Retry) {
viewModelScope.launch {
repo.loadOrders()
.onSuccess { dispatch(OrderIntent.LoadSuccess(it)) }
.onFailure {
dispatch(OrderIntent.LoadFailure(it.message ?: "Error"))
}
}
}
}
}
Interview Summary Line
In MVI, the reducer is a pure function that maps State + Intent to a new State.
All async work and error handling happen in the ViewModel, and results are fed back as intents, keeping the reducer deterministic and fully testable.
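Because the reducer is pure, it can be exercised with plain JUnit. A minimal test sketch using the types above (test names and assertions are illustrative, not from the original post):
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.test.assertFalse
import kotlin.test.assertNull
import kotlin.test.assertTrue

class OrderReducerTest {

    private val reducer = OrderReducer()

    @Test
    fun `Load sets loading and clears previous error`() {
        val start = OrderState(error = "boom")

        val result = reducer.reduce(start, OrderIntent.Load)

        assertTrue(result.loading)
        assertNull(result.error)
    }

    @Test
    fun `LoadFailure stops loading and exposes the message`() {
        val start = OrderState(loading = true)

        val result = reducer.reduce(start, OrderIntent.LoadFailure("network down"))

        assertFalse(result.loading)
        assertEquals("network down", result.error)
    }
}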
32. Feature modularization?
Feature modularization means splitting a large app into independent, vertical feature modules instead of horizontal layers.
Each feature owns its UI, logic, and dependencies.
Module Structure (Vertical Slices)
:core:common // utils, base classes, design system
:data:network // API, DB, repositories
:domain:usecase // business logic
:feature:cart // cart UI + feature logic
:feature:profile // profile UI + feature logic
Rules
- Features depend only on domain and core
- No feature-to-feature dependencies
- data is not visible to UI directly
Gradle Setup
// settings.gradle.kts
include(":core:common")
include(":data:network")
include(":domain:usecase")
include(":feature:cart")Dynamic Feature Module (Play Feature Delivery)
// :app/build.gradle.kts — the base module declares which modules are dynamic
android {
    dynamicFeatures += setOf(":feature:cart")
}

// :feature:cart/build.gradle.kts — the feature applies the dynamic-feature plugin
plugins {
    id("com.android.dynamic-feature")
}
dependencies {
    implementation(project(":domain:usecase"))
}
Benefits (What Interviewers Care About)
- Faster builds (parallel compilation)
- Independent feature development
- Safer refactoring and ownership
- Dynamic delivery (on-demand install)
- Feature-level A/B testing and rollout
Vertical vs Horizontal Modularization
Horizontal (Layer-based ❌)
:ui
:data
:domain
Problems
- Tight coupling across the app
- Small change triggers full rebuild
- Hard to assign feature ownership
- Poor scalability for large teams
Vertical (Feature-based ✅)
:feature:cart
:feature:profile
:feature:checkout
:core
:data
:domain
Advantages
- Each feature is self-contained
- Teams work independently
- Faster parallel builds
- Easier refactoring and deletion
- Supports Dynamic Feature Delivery
👉 Preferred approach for large Android apps
Recommended Module Structure
:core:common // UI components, utils
:data:network // API, DB implementations
:domain:usecase // business logic contracts
:feature:cart // cart feature (UI + logic)
:feature:profile
Dependency Graph Rules (Critical)
feature → domain → core
feature → core
data → domain
Strict Rules
- ❌ Feature → Feature (forbidden)
- ❌ Domain → Data (forbidden)
- ❌ Core → Feature (forbidden)
- ✅ Feature depends only on Domain + Core
- ✅ Data implements Domain interfaces
This ensures acyclic dependencies and clean architecture.
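To make "Data implements Domain interfaces" concrete, a minimal sketch of the inversion (interface, class, and API names here are illustrative, not from the original post):
// :domain — contract only, no Android or network types
interface OrderRepository {
    suspend fun orders(): List<Order>
}

// :data — implements the domain contract, so the dependency points data → domain
class NetworkOrderRepository(
    private val api: OrderApi // hypothetical Retrofit-style client
) : OrderRepository {
    override suspend fun orders(): List<Order> = api.fetchOrders()
}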
Gradle Example
// settings.gradle.kts
include(":core:common")
include(":data:network")
include(":domain:usecase")
include(":feature:cart")
// :feature:cart/build.gradle.kts
plugins {
    id("com.android.dynamic-feature")
}
dependencies {
    implementation(project(":domain:usecase"))
    implementation(project(":core:common"))
}
Why This Scales (Interview Focus)
- Parallel Gradle builds (faster CI)
- Independent feature delivery
- Dynamic Play Feature support
- Feature-level A/B testing
- Clear ownership per team
Interview Summary Line
For large apps, I use vertical feature modularization with strict dependency rules: each feature is isolated, depends only on domain and core, never on another feature, and can be delivered dynamically. This improves build speed, team scalability, and release safety.
33. Gradle build optimization?
Gradle Build Optimization (100+ Modules)
For very large Android projects, Gradle optimization focuses on avoiding unnecessary work, maximizing parallelism, and stabilizing configuration so builds scale locally and in CI.
Key Optimization Strategies
- Configuration Cache — skips project reconfiguration
- Parallel Execution — builds independent modules concurrently
- Build Cache — reuses task outputs (local + CI)
- Task Avoidance — configures only required tasks
- Version Catalogs — central dependency management
- Incremental Compilation — recompiles only changed sources
gradle.properties
org.gradle.parallel=true
org.gradle.configuration-cache=true
org.gradle.caching=true
kotlin.incremental=true
android.useAndroidX=true
android.enableJetifier=false
org.gradle.jvmargs=-Xmx4096m -XX:MaxMetaspaceSize=512m
Root build.gradle.kts
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
kotlinOptions {
jvmTarget = "17"
}
}
subprojects {
apply(plugin = "com.gradle.enterprise")
}Dependency Management (Version Catalog)
# gradle/libs.versions.toml
[versions]
kotlin = "1.9.22"

[libraries]
coroutines = { module = "org.jetbrains.kotlinx:kotlinx-coroutines-core", version = "1.8.0" }
Common Configuration Cache Pitfalls
These are the most common reasons config cache breaks in large projects:
❌ Reading values at configuration time
val version = System.getenv("VERSION") // breaks cache
✔ Fix: Use Providers API
val version = providers.environmentVariable("VERSION") // Provider<String>, resolved lazily at execution time
❌ Using afterEvaluate {}
- Prevents configuration cache
- Indicates improper lazy configuration
✔ Fix: Use configureEach {} and lazy APIs
❌ Accessing project state in task actions
doLast {
println(project.name) // unsafe
}
✔ Fix: Pass values as task inputs
❌ Non-cacheable custom tasks
- Missing @Input / @Output annotations
- Writing to random files
✔ Fix: Declare inputs/outputs explicitly
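A sketch of what declared inputs/outputs look like on a custom task in a build.gradle.kts (task name, property names, and file path are illustrative):
// Illustrative cacheable task: values come in as declared inputs, not from project state
@CacheableTask
abstract class GenerateVersionFileTask : DefaultTask() {

    @get:Input
    abstract val version: Property<String> // declared input → participates in the cache key

    @get:OutputFile
    abstract val outputFile: RegularFileProperty // declared output → enables build-cache reuse

    @TaskAction
    fun generate() {
        outputFile.get().asFile.writeText("version=${version.get()}")
    }
}

// Registered lazily; the env variable is read through the Providers API
tasks.register<GenerateVersionFileTask>("generateVersionFile") {
    version.set(providers.environmentVariable("VERSION").orElse("dev"))
    outputFile.set(layout.buildDirectory.file("generated/version.properties"))
}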
How to Measure Gradle Performance Correctly
1️⃣ Use Build Scans (Recommended)
./gradlew assembleDebug --scan
Gives:
- Configuration time
- Task execution time
- Cache hits/misses
- Parallelism usage
2️⃣ Compare Clean vs Incremental Builds
./gradlew clean assembleDebug
./gradlew assembleDebug
Key metric: incremental build time, not clean builds.
3️⃣ Enable Profiling
./gradlew assembleDebug --profile
Generates:
- HTML report with task timing
- Identifies slow plugins/tasks
4️⃣ CI Measurement (Critical)
- Track:
- Configuration time
- Cache hit ratio
- Wall-clock build time
- Fail PRs that regress build time
Why This Works (Interview Focus)
- Configuration cache removes repeated setup cost
- Parallel workers fully utilize CPU cores
- Build cache speeds up CI dramatically
- Version catalogs simplify dependency upgrades
- Measured builds prevent silent regressions
Interview Summary Line
For 100+ modules, I optimize Gradle using configuration cache, parallel execution, and build caching, while avoiding common config-cache pitfalls. I rely on build scans and profiling to measure real performance and prevent regressions.
34. ArchUnit enforcement?
ArchUnit Enforcement (Automatic Architecture Rules)
ArchUnit enforces architectural constraints using JUnit tests.
These tests run automatically in CI and fail the build on violations, preventing architecture decay in large teams.
Core Rules We Enforce
- Domain layer is framework-free
- No cyclic dependencies
- Strict unidirectional layer flow
- Feature modules are isolated
Base ArchUnit Tests
import com.tngtech.archunit.core.importer.ClassFileImporter
import com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses
import com.tngtech.archunit.library.dependencies.SlicesRuleDefinition.slices
import org.junit.jupiter.api.Test

class ArchitectureTest {

    // Root package is a placeholder — use your app's base package
    private val classes = ClassFileImporter().importPackages("com.example")

    @Test
    fun `domain layer is pure`() {
        noClasses()
            .that().resideInAPackage("..domain..")
            .should().dependOnClassesThat()
            .resideInAnyPackage("android..", "androidx..")
            .check(classes)
    }

    @Test
    fun `no dependency cycles`() {
        slices()
            .matching("com.example.(*)..")
            .should().beFreeOfCycles()
            .check(classes)
    }

    @Test
    fun `layers are unidirectional`() {
        noClasses()
            .that().resideInAPackage("..domain..")
            .should().dependOnClassesThat()
            .resideInAPackage("..presentation..")
            .check(classes)
    }
}
Feature-Module ArchUnit Rules
Feature modularization must be enforced explicitly.
Rule: No Feature → Feature Dependency
@Test
fun `features are isolated`() {
    classes()
        .that().resideInAPackage("..feature..")
        .should().onlyDependOnClassesThat()
        .resideInAnyPackage(
            "..feature..",
            "..domain..",
            "..core..",
            "kotlin..",
            "java.."
        )
        .check(classes)
}
✔ Ensures:
- No feature-to-feature coupling
- Features depend only on domain and core
Rule: UI Cannot Access Data Directly
@Test
fun `presentation does not depend on data`() {
    noClasses()
        .that().resideInAPackage("..presentation..")
        .should().dependOnClassesThat()
        .resideInAPackage("..data..")
        .check(classes)
}
✔ Forces use of domain interfaces only
Gradle Integration
Add ArchUnit Dependency
// build.gradle.kts (test module)
dependencies {
testImplementation("com.tngtech.archunit:archunit-junit5:1.2.1")
}
Ensure Tests Run in CI
ArchUnit tests run as standard unit tests:
./gradlew test
CI blocks merges if rules are violated.
Optional: Dedicated Architecture Test Module
:architecture-test
Benefits:
- Runs fast
- No Android plugin needed
- Centralized rules for all modules
Why This Scales (Interview Focus)
- Architecture rules are executable, not tribal knowledge
- Prevents accidental coupling during fast development
- Eliminates manual architecture review overhead
- Works reliably with 100+ modules and teams
Interview Summary Line
I enforce architecture using ArchUnit tests integrated into Gradle and CI. Rules cover layer purity, dependency direction, and feature isolation, ensuring the architecture stays intact as teams and modules scale.
35. State hoisting Compose?
State Hoisting in Jetpack Compose (Best Practices)
State hoisting means moving state ownership up to the caller (or ViewModel), passing state down and sending events up.
This keeps Composables stateless, reusable, and predictable.
Core Principles
- State is owned by the closest common parent or ViewModel
- UI receives state + callbacks, not mutable state
- Prefer @Stable / immutable state objects
- Scope recomposition to the smallest UI
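As a minimal illustration of these principles before the ViewModel-backed example below (composable names are made up for this sketch):
@Composable
fun SearchBar(
    query: String, // state passed down
    onQueryChange: (String) -> Unit, // events passed up
    modifier: Modifier = Modifier
) {
    TextField(value = query, onValueChange = onQueryChange, modifier = modifier)
}

@Composable
fun SearchScreen() {
    // State is hoisted here; SearchBar itself stays stateless and reusable
    var query by rememberSaveable { mutableStateOf("") }
    SearchBar(query = query, onQueryChange = { query = it })
}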
Hoisted UI Composable (Stateless)
@Composable
fun CartScreen(
    cartItems: List<CartItem>, // hoisted from the ViewModel
    onAdd: (CartItem) -> Unit,
    modifier: Modifier = Modifier
) {
    LazyColumn(modifier) {
        items(
            items = cartItems,
            key = { it.id }
        ) { item ->
            CartItemRow(item, onAdd)
        }
    }
}
✔ No internal state
✔ Easy to preview and test
✔ Minimal recomposition
State Owner (Hoister)
@Composable
fun CartFeature() {
    val vm: CartVm = hiltViewModel()
    val state by vm.cart.collectAsState()

    CartScreen(
        cartItems = state,
        onAdd = vm::addItem
    )
}
Stability Rules (Critical)
@Stable
data class CartItem(
val id: String,
val name: String
)
- Stable data prevents cascading recompositions
- Lint enforces correct usage
- Works seamlessly with LazyColumn keys
What Interviewers Look For
- Stateless composables
- Single source of truth in ViewModel
- Controlled recomposition
- Stability annotations used correctly
Interview Summary Line
In Compose, I hoist state to the ViewModel or closest parent, keep composables stateless, and rely on stable data models. This minimizes recomposition, improves reuse, and keeps UI predictable.
36. Error boundaries?
Error Boundaries in Compose/MVVM
Error boundaries are a pattern for handling errors locally and globally in Compose and MVVM apps.
They isolate UI from errors and allow retry, logging, and graceful recovery.
Core Principles
- Model UI state with a sealed interface:
sealed interface UiState<out T> {
object Loading : UiState<Nothing>
data class Success<T>(val data: T) : UiState<T>
data class Error(val message: String) : UiState<Nothing>
}
- Errors are part of the state, not side effects
- UI reacts to state variants
- Retry actions are idempotent
Composable Error Boundary
@Composable
fun ErrorBoundary(
    error: String?,
    onRetry: () -> Unit,
    content: @Composable () -> Unit
) {
    if (error != null) {
        Column(horizontalAlignment = Alignment.CenterHorizontally) {
            Text("Payment failed: $error")
            Spacer(Modifier.height(8.dp))
            Button(onClick = onRetry) { Text("Retry") }
        }
    } else {
        content()
    }
}
✔ Localizes error handling
✔ Displays retry UI
✔ Keeps main content composable clean
Using Error Boundary in Screen
@Composable
fun PaymentScreen(vm: PaymentVm = hiltViewModel()) {
    val state by vm.state.collectAsState()

    ErrorBoundary(
        error = (state as? UiState.Error)?.message,
        onRetry = vm::retry
    ) {
        when (val s = state) {
            UiState.Loading -> CircularProgressIndicator()
            is UiState.Success -> SuccessScreen(s.data)
            is UiState.Error -> Unit // rendered by ErrorBoundary
        }
    }
}
Global Error Handling
- Use a CompositionLocal to provide a global error handler
- Integrate with crash reporting (e.g., Sentry)
val LocalErrorHandler = compositionLocalOf<ErrorHandler> { DefaultErrorHandler }

@Composable
fun AppTheme(
    errorHandler: ErrorHandler = DefaultErrorHandler, // e.g. a Sentry-backed handler in production
    content: @Composable () -> Unit
) {
    CompositionLocalProvider(LocalErrorHandler provides errorHandler) {
        content()
    }
}
- Global errors log breadcrumbs for analytics
}- Global errors log breadcrumbs for analytics
- Local errors remain retryable
Best Practices
- Make retry actions idempotent (see the ViewModel sketch after this list)
- Model all errors in sealed state
- Keep UI composables stateless
- Use stable data classes for state to avoid unnecessary recomposition
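A sketch of what an idempotent retry can look like on the ViewModel side (PaymentRepo, fetchPayment, and Payment are assumptions for illustration):
class PaymentVm(
    private val repo: PaymentRepo // hypothetical repository
) : ViewModel() {

    private val _state = MutableStateFlow<UiState<Payment>>(UiState.Loading)
    val state = _state.asStateFlow()

    init { load() }

    // Retrying just re-runs the same load; calling it repeatedly is safe
    fun retry() = load()

    private fun load() {
        viewModelScope.launch {
            _state.value = UiState.Loading
            repo.fetchPayment() // assumed suspend fun returning Result<Payment>
                .onSuccess { _state.value = UiState.Success(it) }
                .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") }
        }
    }
}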
Why This Scales
- Local boundaries prevent a single failure from crashing the app
- Global boundaries capture unhandled errors for monitoring
- Works for multiple features (Payment, Profile, Checkout)
- Reduces crashes (Sentry integration caught ~90%)
- Ensures predictable, testable error flows
Interview Summary Line
I handle errors in Compose/MVVM using a combination of sealed UiState, local error boundaries for retryable errors, and global CompositionLocal error handlers integrated with crash reporting like Sentry. This ensures predictable UI, easy testing, and full crash observability.
37. Domain events?
Domain Events (Propagating Across Layers)
Domain events are fire-and-forget notifications emitted from the domain layer to other layers (ViewModel, analytics, logging) without creating tight coupling.
Key Principles
- Use SharedFlow for events:
  - replay = 0 → no old events
  - extraBufferCapacity → handle bursts without backpressure issues
- Scope events to lifecycle-aware coroutines
- Tag metadata for tracking
- Avoid memory leaks by tying the flow to viewModelScope or lifecycleScope
Event Bus
class DomainEventBus {
private val _events = MutableSharedFlow<Event>(
replay = 0,
extraBufferCapacity = 100 // backpressure buffer
)
val events = _events.asSharedFlow()
suspend fun emit(event: Event) {
_events.emit(event)
}
}
Domain Event Example
// Marker type for everything published on the bus
sealed interface Event

data class AnalyticsEvent(
    val name: String,
    val payload: Map<String, Any>
) : Event

// Domain use case
class OrderUseCase(private val bus: DomainEventBus) {
    suspend fun placeOrder(order: Order) {
        // Business logic here...
        bus.emit(AnalyticsEvent("order_placed", mapOf("id" to order.id)))
    }
}
ViewModel Observing Events
class OrderVm(private val bus: DomainEventBus, private val analytics: AnalyticsTracker) : ViewModel() {
init {
viewModelScope.launch {
bus.events
.filterIsInstance<AnalyticsEvent>()
.collect { analytics.track(it) }
}
}
fun placeOrder(order: Order) {
viewModelScope.launch {
OrderUseCase(bus).placeOrder(order)
}
    }
}
}✔ No leaks: flow tied to viewModelScope
✔ Backpressure handled with extraBufferCapacity
✔ Alternative: a Channel for ordered, single-consumer delivery
Best Practices
- Use sealed interfaces for domain events if multiple types exist
- Keep events lightweight
- Avoid exposing mutable flow to outside modules
- Ensure scope-bound collection to prevent memory leaks
- Log/track events in production with tags/metadata
Why This Scales
- Enables loose coupling between domain, VM, and analytics/logging
- Supports multiple subscribers
- Works reliably in large apps with 50–100+ modules
- Avoids cascade recompositions in UI while still notifying external systems
Interview Summary Line
I propagate domain events using a SharedFlow fire-and-forget bus, scoped to the ViewModel or lifecycle, ensuring loose coupling between domain, UI, and analytics. Events are lightweight, metadata-tagged, and safe from leaks, allowing scalable observability across large apps.
38. UseCase testable?
Testable UseCases (Clean Architecture)
UseCases should encapsulate business logic as pure, testable functions, depending only on ports/adapters.
This enables 100% unit testing without Android dependencies.
Core Principles
- Make UseCases stateless and pure
- Inject interfaces (ports) for external dependencies
- Avoid side effects in the domain layer
- Test all logic using fakes/mocks
- Ensure predictable, reproducible results
Example: CheckoutUseCase
class CheckoutUseCase @Inject constructor(
private val validatePort: ValidatePort,
private val paymentPort: PaymentPort
) {
suspend operator fun invoke(input: CheckoutInput): Result<CheckoutResult> {
return validatePort.validate(input.card).fold(
onFailure = { Result.failure(ValidationError(it)) },
onSuccess = {
paymentPort.charge(input.amount).map { CheckoutResult(it.id) }
}
)
}
}
- Pure function of input → output
- Depends only on ports, not Android frameworks
- Can be safely reused in multiple layers (VM, domain, analytics)
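The ports and value types referenced above are plain Kotlin declarations owned by the domain module; a sketch reconstructed from how the use case and test use them (so shapes are assumed, not definitive):
// Domain-owned ports; concrete implementations live in the data layer
interface ValidatePort {
    suspend fun validate(card: String): Result<Unit>
}

interface PaymentPort {
    suspend fun charge(amount: Int): Result<PaymentId>
}

data class PaymentId(val id: String)
data class CheckoutInput(val card: String, val amount: Int)
data class CheckoutResult(val id: String)
class ValidationError(cause: Throwable) : Exception(cause)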
Unit Test Example
class CheckoutUseCaseTest {
    private val mockValidate = mockk<ValidatePort>()
    private val mockPayment = mockk<PaymentPort>()
    private val useCase = CheckoutUseCase(mockValidate, mockPayment)

    @Test
    fun `invalid card fails validation`() = runTest {
        coEvery { mockValidate.validate(any()) } returns
            Result.failure(IllegalArgumentException("Invalid"))

        val result = useCase(CheckoutInput(card = "1234", amount = 100))

        assertTrue(result.exceptionOrNull() is ValidationError)
    }

    @Test
    fun `valid card charges payment`() = runTest {
        coEvery { mockValidate.validate(any()) } returns Result.success(Unit)
        coEvery { mockPayment.charge(100) } returns Result.success(PaymentId("xyz"))

        val result = useCase(CheckoutInput(card = "valid", amount = 100))

        assertEquals("xyz", result.getOrThrow().id)
    }
}
- Mocks/fakes isolate dependencies
- Can test all branches of logic
- Pure, framework-free → fast, reliable, reproducible tests
Best Practices
- Keep domain pure forever
- Use ports/adapters pattern for external systems
- Prefer coroutines + suspend functions for async
- Verify flows with Turbine or mocks (MockK/Mockito)
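For UseCases that expose Flows, a Turbine-based test might look like this (the use case, fake repository, and Item type are illustrative):
import app.cash.turbine.test
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

data class Item(val id: String)

interface CartRepo { fun observeCart(): Flow<List<Item>> }

// Fake returning a fixed stream of emissions
class FakeCartRepo : CartRepo {
    override fun observeCart(): Flow<List<Item>> =
        flowOf(listOf(Item("1")), emptyList())
}

class ObserveCartUseCase(private val repo: CartRepo) {
    operator fun invoke(): Flow<List<Item>> = repo.observeCart()
}

class ObserveCartUseCaseTest {

    private val useCase = ObserveCartUseCase(FakeCartRepo())

    @Test
    fun `emits cart updates in order`() = runTest {
        useCase().test {
            assertEquals(listOf(Item("1")), awaitItem())
            assertEquals(emptyList<Item>(), awaitItem())
            awaitComplete()
        }
    }
}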
Why This Scales
- Ensures high test coverage without Android dependencies
- Easy refactoring and maintainable business rules
- Reusable across features and modules
- Safe for teams scaling 50+ modules
Interview Summary Line
I make UseCases pure and framework-free, inject ports/adapters for dependencies, and unit-test all scenarios using mocks/fakes. This ensures highly testable, maintainable, and reusable domain logic across the app.
39. KMP sharing?
Kotlin Multiplatform (KMP) Code Sharing
KMP allows sharing domain logic, UseCases, and even UI (Compose Multiplatform) across Android, iOS, and desktop targets.
Goal: maximize code reuse while keeping platform-specific integrations clean.
Core Principles
- Keep domain and business logic in
commonMain - Use expect/actual for platform-specific implementations (logging, networking, storage)
- Prefer pure Kotlin for shared modules
- Target multi-platform UI with Compose Multiplatform (2026-ready)
- CI runs multi-target builds to ensure compatibility
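A minimal shared-module Gradle setup sketch (module name, targets, namespace, and versions are illustrative):
// shared/build.gradle.kts
plugins {
    kotlin("multiplatform")
    id("com.android.library")
}

kotlin {
    androidTarget()
    iosArm64()
    iosSimulatorArm64()

    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.0")
            }
        }
        val commonTest by getting {
            dependencies {
                implementation(kotlin("test"))
            }
        }
    }
}

android {
    namespace = "com.example.shared" // placeholder
    compileSdk = 34
}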
Shared Domain Example (commonMain)
// Platform-specific logger
expect class PlatformLogger {
fun log(msg: String)
}
// Shared UseCase
class SharedUseCase(private val logger: PlatformLogger) {
fun process(data: Data): Result<String> {
return try {
logger.log("Processing $data")
Result.success(data.processed)
} catch (e: Exception) {
Result.failure(e)
}
}
}
- Pure business logic in commonMain
- Reusable across Android/iOS/desktop
- No platform dependencies here
Platform Implementations
Android (androidMain)
actual class PlatformLogger {
    actual fun log(msg: String) {
        Log.d("Shared", msg)
    }
}
iOS (iosMain)
actual class PlatformLogger {
    actual fun log(msg: String) {
        NSLog(msg)
    }
}
Repository Strategy
- 70% of the codebase is shared
- Platform-specific code is minimal
- CI runs multi-target builds (Android + iOS + JVM) to ensure correctness
Best Practices
- Keep UI-specific code separate (Compose Multiplatform UI optional)
- Use expect/actual sparingly for platform integration
- Prefer pure domain logic for testing and reuse
- Tag/annotate shared code clearly to avoid platform leaks
Why This Scales
- Reduces duplication across platforms
- Simplifies testing (single shared UseCase test suite)
- Ensures consistency in domain/business rules
- Enables teams to ship Android + iOS faster with shared CI pipelines
Interview Summary Line
I share domain and UseCases with KMP using commonMain modules and expect/actual for platform-specific integrations. This enables ~70% shared code, platform-independent testing, and multi-target CI builds while keeping UI and platform-specific code isolated.
40. Migration MVC→MVVM?
Incremental Migration: MVC → MVVM
Migrating a large legacy MVC app to MVVM without breaking production is best done using the Strangler Fig pattern.
- Introduce new MVVM screens alongside old MVC
- Toggle with feature flags for safe rollout
- Extract business logic to shared, testable UseCases
- Incrementally replace views and controllers
Core Principles
- Avoid rewriting the entire app at once
- Use shared UseCases to decouple logic from views
- MVVM receives state from UseCases via Flow / LiveData
- Monitor A/B metrics to validate changes
Example: Cart Feature Migration
Legacy MVC Activity
class CartMvcActivity : Activity() {
/* spaghetti code handling UI and business logic */
}
Strangler Fig: Shared UseCase
class CartUseCase(private val repo: CartRepo) {
fun loadCart(): Flow<List<Item>> = repo.observeCart()
}
New MVVM Fragment
class CartVm(private val useCase: CartUseCase) : ViewModel() {
    val cart = useCase.loadCart().stateIn(viewModelScope, SharingStarted.Lazily, emptyList())
}

class CartMvvmFragment : Fragment() {
    private val vm by viewModels<CartVm>()

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        viewLifecycleOwner.lifecycleScope.launch {
            viewLifecycleOwner.repeatOnLifecycle(Lifecycle.State.STARTED) {
                vm.cart.collect { binding.cartList.submitList(it) }
            }
        }
    }
}
Feature Flag Toggle
if (featureFlag("mvvm_cart")) {
loadFragment(CartMvvmFragment::class)
} else {
loadActivity(CartMvcActivity::class)
}
Best Practices
- Start with critical business logic in UseCases
- Keep old MVC intact until MVVM proves stable
- Ensure zero downtime in production
- Track user metrics to guide phased rollout
- Repeat pattern for other features, eventually phasing out MVC
Why This Scales
- Incremental migration avoids big-bang rewrites
- MVVM adoption is controlled and measurable
- Shared UseCases prevent duplicate business logic
- Works well in large apps with 50–100+ screens/modules
Interview Summary Line
I migrate legacy MVC to MVVM incrementally using the Strangler Fig pattern: extracting business logic into shared UseCases, introducing MVVM screens feature-flagged alongside MVC, and gradually rolling out with zero downtime and A/B metrics to guide the migration.
EmailId: vikasacsoni9211@gmail.com
LinkedIn: https://www.linkedin.com/in/vikas-soni-052013160/
Happy Learning ❤️