Here I will record some of my project requirements from enterprise development.



Redisson Distributed Lock


Requirement Description

Recently, while working on my project, I needed to add a cache lock to a menu query feature to optimize query performance. However, after introducing a Redis cluster, a plain multi-thread lock no longer meets the requirements (it can only lock within the current JVM and cannot lock the other JVMs in the cluster). Therefore, a distributed lock is needed to achieve this optimization.

Idea and Implementation

The following is my original, hand-written Redis lock code:

public Map<String, List<Catalog2Vo>> getCategoryJsonFromDbWithRedisLock() {
    // distributed lock: a random UUID marks this instance as the lock owner
    String uuid = UUID.randomUUID().toString();
    Boolean acquired = redisTemplate.opsForValue()
            .setIfAbsent("lock", uuid, 100, TimeUnit.SECONDS);
    if (!Boolean.TRUE.equals(acquired)) {
        // failed to acquire the lock: wait briefly and try again
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return getCategoryJsonFromDbWithRedisLock();
    } else {
        Map<String, List<Catalog2Vo>> categoryJsonFromDB;
        try {
            categoryJsonFromDB = getCategoryJsonFromDB();
        } finally {
            // Lua script: release the lock only if we still own it
            String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
            redisTemplate.execute(new DefaultRedisScript<>(script, Long.class), List.of("lock"), uuid);
        }
        return categoryJsonFromDB;
    }
}

Here’s what specifically happens in the code:

  1. Generates a unique UUID as the value for the lock.
  2. Attempts to acquire the lock using Redis’s setIfAbsent method. If successful, returns true; otherwise, returns false. Here, a lock expiration time of 100 seconds is set.
  3. If the lock cannot be acquired (setIfAbsent returns false), it waits 100 milliseconds before trying again, repeating until it succeeds.
  4. Upon acquiring the lock, it executes database operations to retrieve category information.
  5. Finally, it releases the lock by executing a Lua script. This script first checks if the lock’s value matches the current UUID. If it does, the lock is deleted to release resources. This step ensures that only the thread or instance holding the lock can release it, avoiding mistakenly releasing locks held by others.

Now, this part of the code needs to be modified to use distributed locks. Writing distributed locks manually is cumbersome and error-prone, so I’ve chosen to use an existing mature solution: Redisson’s distributed locks for this modification.

public Map<String, List<Catalog2Vo>> getCategoryJsonFromDbWithRedissonLock() {
    // distributed lock with a 10-second lease time
    RLock lock = redissonClient.getLock("CategoryJson-lock");
    lock.lock(10, TimeUnit.SECONDS);
    Map<String, List<Catalog2Vo>> categoryJsonFromDB;
    try {
        categoryJsonFromDB = getCategoryJsonFromDB();
    } finally {
        lock.unlock();
    }
    return categoryJsonFromDB;
}

After configuring Redisson, the getLock method directly provides a distributed lock, greatly simplifying the code and improving efficiency.

Redisson implements distributed locks by leveraging Redis’s atomic operations and publish/subscribe functionality to achieve reliable locking mechanisms in distributed environments.

  1. Based on Redis’s Atomic Operations: Redisson utilizes Redis’s atomic operations to implement distributed locks. The SETNX command in Redis sets a key to hold a string value if the key does not exist. This means only one client can successfully set the key to the locking state.
  2. Lock Timeout and Automatic Renewal: Redisson allows setting a timeout for locks. If a client fails to release the lock within the specified time, Redisson automatically releases it. Additionally, Redisson supports automatic renewal of lock timeouts to prevent lock expiration due to lengthy operations caused by business logic.
  3. Monitoring Lock Release: Redisson uses Redis’s publish/subscribe functionality to monitor lock releases. When a lock is released, Redisson sends a message to clients subscribed to the lock, allowing other clients waiting to acquire the lock to attempt to acquire it.
  4. Reentrancy: Redisson supports reentrant locks, meaning the same thread can acquire the same lock multiple times without causing a deadlock.
  5. Fairness: Redisson provides both fair and unfair lock modes. In fair lock mode, locks are allocated in the order of lock acquisition requests. In unfair lock mode, locks are immediately allocated to any thread in the waiting queue.
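To illustrate these features, here is a minimal, hypothetical sketch (an assumption, not the project's actual configuration) showing tryLock with an explicit wait time and lease time. It assumes a Redis server reachable at 127.0.0.1:6379 and the Redisson client library on the classpath.

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockDemo {
    public static void main(String[] args) throws InterruptedException {
        // Assumption: a Redis server is reachable at 127.0.0.1:6379.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient client = Redisson.create(config);

        RLock lock = client.getLock("demo-lock");
        // Wait up to 5 seconds to acquire; hold for at most 10 seconds.
        // Note: passing an explicit lease time disables the automatic
        // watchdog renewal described in point 2.
        if (lock.tryLock(5, 10, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
            } finally {
                // Reentrancy (point 4): the same thread could acquire
                // this lock again here without deadlocking.
                lock.unlock();
            }
        }
        client.shutdown();
    }
}
```

Omitting the lease time (lock.tryLock(5, TimeUnit.SECONDS)) keeps the watchdog active, which renews the lock while the thread holds it.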

Summary

In summary, this experience deepened my understanding of auxiliary tools such as Redisson and of how to use locks. While similar problems can be solved with Spring Cache, for more fine-grained requirements, implementing locks manually with Redisson is more appropriate.


Table Cell Merging, Dynamic Columns and Excel Export


Requirement Description

I recently created a new dashboard: Personnel Traceability. This dashboard is designed to record the work status of employees in a specific position within a particular production line under a given project. For example, as illustrated in the diagram below, Project A may have two production lines: “Assembly” and “Production,” each associated with a specific job name. It’s important to note that a single position within a project’s production line may have multiple individuals working in it. In such cases, all individuals should be displayed, with their names separated by commas.

The dashboard also features dropdown search options for selecting the project and production line, as well as the ability to choose start and end dates. Additionally, it supports a fuzzy search for employee ID and name.

Furthermore, the table will dynamically expand horizontally based on the selected date range. This means that the date column headers, such as “2023/1/1 - 2023/1/5,” will be generated dynamically, while the columns for “Project,” “Line,” and “Job” remain fixed.

Finally, the report should also offer search, reset, and Excel export functionalities.

(Figure: table cell merging and dynamic columns)

Question

This requirement includes several features, and the core challenges are as follows:

  • How to handle the display of dynamic columns?

  • How to merge cells?

  • How to implement Excel export?

Idea and Implementation

Backend and SQL

Before addressing these questions, it’s important to establish a clear backend data retrieval logic. Here, we will use the findDetail backend method to retrieve data from the server. The controller class for this method is as follows:

(The exportExcel method will be used for exporting to Excel, which will be discussed later.)

@RequestMapping("/personnelTraceability")
@RestController
public class PersonnelTraceabilityController {

    final PersonnelTraceabilityService personnelTraceabilityService;

    public PersonnelTraceabilityController(PersonnelTraceabilityService personnelTraceabilityService) {
        this.personnelTraceabilityService = personnelTraceabilityService;
    }

    @PostMapping("/findDetail")
    public WebResult<PageResult<Map<String, Object>>> findDetail(@RequestBody PersonnelTraceabilityDTO queryParam) {
        PageResult<Map<String, Object>> pageResult = personnelTraceabilityService.findDeatil(queryParam);
        return WebResult.ok(pageResult);
    }

    @PostMapping("/exportExcel")
    public void exportExcel(@Validated @RequestBody PersonnelTraceabilityDTO queryParams, HttpServletResponse response) throws IOException {
        personnelTraceabilityService.exportExcel(queryParams, response);
    }
}

You can see that the return type of the findDetail method is a PageResult<Map<String, Object>>. I won’t go into detail about the PageResult class, but in essence, it’s used for pagination, with page and limit properties. The key point here is the returned Map<String, Object>.
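For context, a pagination wrapper like this typically looks something like the following sketch. The field names here (list, totalRow) are inferred from how the frontend later reads res.data.list and res.data.totalRow; the project's actual PageResult class may differ.

```java
import java.util.List;

// Hypothetical sketch of a pagination result wrapper; the project's
// actual PageResult class may have additional fields.
public class PageResult<T> {
    private final List<T> list;   // the rows of the current page
    private final long totalRow;  // total number of rows across all pages

    public PageResult(List<T> list, long totalRow) {
        this.list = list;
        this.totalRow = totalRow;
    }

    public List<T> getList() { return list; }
    public long getTotalRow() { return totalRow; }
}
```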

So, what data is received in this Map<String, Object>?

To answer this question, we need to analyze how the database interaction is performed, specifically the SQL query. Our data comes from a table called New_Base_Call_Names, and the primary columns of this table are as follows:

id   project_Name   Line_Name    job_Name   EmployeeId   EmployeeName   D_date
1    A              Assembly     PM         1259539      Jack           2023-08-09
2    A              Production   QCI        1659467      Julia          2023-08-10
3    B              Production   QCI        1659467      Julia          2023-08-12

This is a standard table structure that essentially stores all the information we need. The challenge arises from the need to display dates horizontally as column names rather than just storing them within the row data. Naturally, you might think of using the pivot function for this purpose.

However, while Oracle SQL does support PIVOT for transposing rows into columns, it requires the pivoted values to be known in advance and listed explicitly in the query, which makes it unsuitable for dynamic column names.

So, we have to dynamically construct the SQL query within Java code to accomplish this task.

First, we create a DTO (Data Transfer Object) class to store the parameters sent from the frontend:

/**
 * @ClassName PersonnelTraceabilityDTO
 * @Description TODO
 * @Author lhj
 * @DATE 2023/10/22 1:56
 * @Version 1.0
 */
@Data
public class PersonnelTraceabilityDTO extends PageParam {

    private String projectCode;
    private String lineCode;
    private String employeeId;
    private String name;
    private String beginDate; // used to generate dynamic columns
    private String endDate;   // used to generate dynamic columns
}

To accomplish this task in Java, it’s important to first determine what the dynamic columns will be. In this case, the dynamically generated columns should be dates. This means that the dates selected by the user in the frontend will be passed as parameters to the SQL query. Since users choose a start date and an end date, we need to retrieve all the dates between the start and end dates and store them in a List.

private List<String> getDateList(LocalDate startDate, LocalDate endDate) {
    List<String> dateList = new ArrayList<>();
    LocalDate currentDate = startDate;
    while (!currentDate.isAfter(endDate)) {
        dateList.add(currentDate.format(DateTimeFormatter.ISO_DATE));
        currentDate = currentDate.plusDays(1);
    }
    return dateList;
}

The dateList will be used in the code snippet below:

private String getSql(PersonnelTraceabilityDTO queryParam, List<String> dateList) {
    String sql = SqlUtils.formatSql("select * from (select distinct " +
            "project_Name as projectName, " +
            "Line_Name as lineName, " +
            "job_Name as jobName, " +
            "EmployeeId || '/' || EmployeeName as employeeInfo, " +
            "D_date " +
            "from NEW_BASE_CALL_NAMES " +
            "where employeeId is not null" +
            (StringUtils.isNotEmpty(queryParam.getProjectCode()) ? " and project_code = '" + queryParam.getProjectCode() + "'" : "") +
            (StringUtils.isNotEmpty(queryParam.getLineCode()) ? " and line_code = '" + queryParam.getLineCode() + "'" : "") +
            (StringUtils.isNotEmpty(queryParam.getEmployeeId()) ? " and employeeId like '%" + queryParam.getEmployeeId() + "%'" : "") +
            (StringUtils.isNotEmpty(queryParam.getName()) ? " and name like '%" + queryParam.getName() + "%'" : "") +
            " and d_date >= '" + queryParam.getBeginDate() + "' and d_date <= '" + queryParam.getEndDate() +
            "') pivot (listagg(employeeInfo, ',') within group (order by employeeInfo) for d_date in " +
            Tools.listToStr(dateList) + ")");
    return sql;
}

In the SQL construction shown above, ternary expressions such as StringUtils.isNotEmpty(queryParam.getProjectCode()) ? ... : "" check whether certain parameters are non-empty. If a parameter is present, the corresponding condition is appended to the WHERE clause; if it is empty, an empty string is concatenated instead. (Note that concatenating user input directly into SQL like this is open to SQL injection; bind variables would be safer in production.) Additionally, when generating dynamic column names, Oracle requires single quotes around each pivoted value. For example, the PIVOT syntax is as follows:

SELECT *
FROM (SELECT Region, Month, Sales FROM sales)
PIVOT (SUM(Sales) FOR Month IN ('Jan', 'Feb', 'Mar'));

Therefore, we use the Tools.listToStr method to process dynamic data, adding single quotes around each element in the incoming List<String>, and then joining them with commas. The specific method details are not provided here.
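Since the details are not shown, here is a hypothetical sketch of what such a helper could look like (an assumption for illustration, not the project's actual implementation): it quotes each element, joins the results with commas, and parenthesizes the whole list so it can be dropped into the PIVOT ... IN (...) clause.

```java
import java.util.List;
import java.util.stream.Collectors;

public class Tools {
    // Hypothetical sketch: quote each element and join with commas,
    // producing the value list used in PIVOT (... FOR d_date IN (...)).
    public static String listToStr(List<String> values) {
        return values.stream()
                .map(v -> "'" + v + "'")
                .collect(Collectors.joining(", ", "(", ")"));
    }
}
```

For example, listToStr(List.of("2023-08-09", "2023-08-10")) would produce ('2023-08-09', '2023-08-10').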

Additionally, since the requirement calls for combining the names and employee IDs of multiple employees, separated by commas, we use EmployeeId || '/' || EmployeeName as employeeInfo to concatenate these values into a single string. The pivot clause listagg(employeeInfo, ',') within group (order by employeeInfo) then aggregates all employees for each cell, separated by commas.

In summary, the format sent to the frontend is determined after executing the SQL and applying pagination using the code snippet below:

@Override
public PageResult<Map<String, Object>> findDeatil(PersonnelTraceabilityDTO queryParam) {
    LocalDate startDate = LocalDate.parse(queryParam.getBeginDate(), DateTimeFormatter.ISO_DATE);
    LocalDate endDate = LocalDate.parse(queryParam.getEndDate(), DateTimeFormatter.ISO_DATE);

    List<String> dateList = getDateList(startDate, endDate);

    String sql = getSql(queryParam, dateList);
    PageRequest request = DefaultPageRequest.of(queryParam.getPage(), queryParam.getLimit());
    PageResult<Map<String, Object>> result = getSqlManager().execute(new SQLReady(sql), Map.class, request);
    return result;
}

The format of the Map in this context is as follows:

key             value
projectName     A
lineName
jobName
'2023-08-09'
'2023-08-10'

The above Map is the data that will be sent from the backend to the frontend.

Dynamic Column Display

Now, the frontend receives the parameters, stores the paginated data in dataList, and stores the total data count in total:

handleList() {
    this.dataTable.loading = true;
    personTraceApi.findDetail({
        projectCode: this.queryParam.query.projectCode,
        lineCode: this.queryParam.query.lineCode,
        beginDate: this.queryParam.query.beginDate,
        endDate: this.queryParam.query.endDate,
        name: this.queryParam.query.name,
        employeeId: this.queryParam.query.employeeId,
        page: this.queryParam.query.page,
        limit: this.queryParam.query.limit,
    }).then((res) => {
        this.dataTable = Object.assign(this.dataTable, {
            dataList: res.data.list,
            total: res.data.totalRow
        });
    });
}

Now, we can imagine that a List<Map<String, Object>> has been stored in this.dataTable.dataList. Next, let’s think about how to display such results in the frontend.

In fact, it’s quite simple. Element UI supports operations like this:

<el-table-column
    v-for="column in dynamicColumns"
    :key="column"
    :prop="column"
    :label="column"
    width="100px"
></el-table-column>

By using v-for to iterate through an array, each element can be displayed as a column. Therefore, it’s enough to store the keys of each map from the backend data in the dynamicColumns variable to dynamically display them every time. However, there’s an even simpler approach. While the number of columns is dynamic, which columns need to be displayed can be relatively fixed. Users first select a date range, so it makes sense to use the selected date range as the range for dynamic columns:

1
2
3
4
5
6
7
8
9
10
11
12
13
generateDateRange(startdate,enddate){
const startdate = new Date(startdate);
const enddate = new Date(endDate);
const dateArray = [];
const currentDate = new Date(startdate);
// Loop through each day and convert it to a string.
while(currentDate <= enddate){
const formattedDate = this.formatDate(currentDate);
dateArray.push(formattedDate);
currentDate.setDate(currentDate.getDate() + 1);
}
return dateArray;
},

Following the above approach, if the user selects the date range from 2023-08-08 to 2023-08-10, then dateArray = ['2023-08-08', '2023-08-09', '2023-08-10']. Placing this value in dynamicColumns suffices, and the handleList() method can be expanded as follows:

handleList() {
    this.dataTable.loading = true;
    personTraceApi.findDetail({
        projectCode: this.queryParam.query.projectCode,
        lineCode: this.queryParam.query.lineCode,
        beginDate: this.queryParam.query.beginDate,
        endDate: this.queryParam.query.endDate,
        name: this.queryParam.query.name,
        employeeId: this.queryParam.query.employeeId,
        page: this.queryParam.query.page,
        limit: this.queryParam.query.limit,
    }).then((res) => {
        this.dataTable = Object.assign(this.dataTable, {
            dataList: res.data.list,
            total: res.data.totalRow
        });
        this.dynamicColumns = this.generateDateRange(
            this.queryParam.query.beginDate,
            this.queryParam.query.endDate,
        );
    });
}

Cell Merge

We’ve addressed the issue of dynamic columns. Now, let’s consider how to implement cell merging for row data in the table.

Firstly, Element UI does support cell merging, but we need to configure a merging function within :span-method as follows:

<el-table
    :data="dataTable.dataList"
    style="width: 100%"
    :span-method="arraySpanMethod"
    border
    :header-cell-style="{'text-align':'center'}"
    v-loading="dataTable.loading"
>
</el-table>

The cell-merging function is as follows:

// cell merging
arraySpanMethod({ row, column, rowIndex, columnIndex }) {
    // determine whether the current column is the first column (project)
    if (columnIndex === 0) {
        const _row = this.spanArr[rowIndex];
        const _col = _row > 0 ? 1 : 0;
        return {
            rowspan: _row,
            colspan: _col
        };
    }
    // determine whether the current column is the second column (line)
    if (columnIndex === 1) {
        const _row2 = this.spanArr2[rowIndex];
        const _col2 = _row2 > 0 ? 1 : 0;
        return {
            rowspan: _row2,
            colspan: _col2
        };
    }
},

The function accepts a parameter object containing row (the current row's data), column (the current column), rowIndex (the current row index), and columnIndex (the current column index).

In the code, it first checks if columnIndex is 0. If it is, it proceeds with the logic for row merging.

If columnIndex is 0, it checks an array (or similar structure) named spanArr to determine the number of rows and columns that need to be merged for the current column in that row. It then returns this merging information as an object, including rowspan (number of rows to merge) and colspan (number of columns to merge).

Subsequently, the code similarly checks if columnIndex is 1 to determine if the second column (line) needs row and column merging. The logic for this part is similar to the first part.

The spanArr array is built as follows (it compares each row's value with the previous row's: when they are equal, the span count at the group's first row is extended and a 0 is pushed for the merged-away cell; when they differ, a new span starts with a 1):

// Build the arrays describing which cells need to be merged.
getSpanArr(data) {
    for (let i = 0; i < data.length; i++) {
        if (i === 0) {
            this.spanArr = [];
            this.spanArr2 = [];
            this.spanArr.push(1);
            this.spanArr2.push(1);
            this.pos = 0;
            this.pos2 = 0;
        } else {
            // Determine if the current element is equal to the previous element
            if (data[i].projectName === data[i - 1].projectName) {
                this.spanArr[this.pos] += 1;
                this.spanArr.push(0);
            } else {
                this.spanArr.push(1);
                this.pos = i;
            }
            if (data[i].lineName === data[i - 1].lineName) {
                this.spanArr2[this.pos2] += 1;
                this.spanArr2.push(0);
            } else {
                this.spanArr2.push(1);
                this.pos2 = i;
            }
        }
    }
},

A loop is used to iterate through the data array.

In the first iteration (with i equal to 0), spanArr and spanArr2 are initialized as arrays with a single element with a value of 1. pos and pos2 are initialized to 0.

For subsequent iterations (when i is greater than 0), it checks if the current element (data[i]) and the previous element (data[i - 1]) satisfy specific conditions. These conditions involve certain data properties, such as projectName and lineName.

If the current element’s projectName is the same as the previous element’s, the rows should be merged. So, it increments the span count stored at the group’s starting position in spanArr and pushes a 0, meaning the current row’s cell will be hidden (merged into the cell above).

Otherwise, if the projectName is different, it signifies the start of a new row in the next cell. Thus, a 1 is added to spanArr, and pos is updated to the current iteration index i. A similar logic is applied to spanArr2, corresponding to the lineName column.

This completes the cell merging for row data:

handleList() {
    this.dataTable.loading = true;
    personTraceApi.findDetail({
        projectCode: this.queryParam.query.projectCode,
        lineCode: this.queryParam.query.lineCode,
        beginDate: this.queryParam.query.beginDate,
        endDate: this.queryParam.query.endDate,
        name: this.queryParam.query.name,
        employeeId: this.queryParam.query.employeeId,
        page: this.queryParam.query.page,
        limit: this.queryParam.query.limit,
    }).then((res) => {
        this.dataTable = Object.assign(this.dataTable, {
            dataList: res.data.list,
            total: res.data.totalRow
        });
        const modifiedData = [];
        this.dataTable.dataList.forEach((item) => {
            const modifiedItem = {};
            Object.keys(item).forEach((key) => {
                // Strip the single quotes around keys of the form 'yyyy-mm-dd'
                if (/^'\d{4}-\d{2}-\d{2}'$/.test(key)) {
                    modifiedItem[key.slice(1, -1)] = item[key];
                } else {
                    modifiedItem[key] = item[key];
                }
            });
            modifiedData.push(modifiedItem);
        });
        this.dataTable.dataList = modifiedData;
        this.dataTable.loading = false;
        this.getSpanArr(this.dataTable.dataList);
        this.dynamicColumns = this.generateDateRange(
            this.queryParam.query.beginDate,
            this.queryParam.query.endDate,
        );
    });
},

Additionally, I’ve added some logic to the code. Recall that the date keys in the data received from the backend are in the format 'yyyy-mm-dd', including the single quotes, which would otherwise be displayed in the frontend. So I used a regular expression to match these keys and strip the quotes.

Excel Export

Excel export is somewhat similar to search, with the difference that the data collected doesn’t need to be paginated; it needs to be fully displayed. Here’s the frontend code for this:

handleExport() {
    personTraceApi.exportExcel({
        projectCode: this.queryParam.query.projectCode,
        lineCode: this.queryParam.query.lineCode,
        beginDate: this.queryParam.query.beginDate,
        endDate: this.queryParam.query.endDate,
        name: this.queryParam.query.name,
        employeeId: this.queryParam.query.employeeId,
    }).then((res) => {
        const blob = new Blob([res], {
            type: "application/vnd.ms-excel;charset=utf-8"
        });
        const downloadElement = document.createElement("a");
        const href = window.URL.createObjectURL(blob);
        downloadElement.href = href;
        downloadElement.download = "personnelTraceability.xls";
        document.body.appendChild(downloadElement);
        downloadElement.click();
        document.body.removeChild(downloadElement);
        window.URL.revokeObjectURL(href);
    })
}
  1. Once the request is successful and the export data is obtained (in the then section), the data is stored in the res variable in the form of a binary stream.
  2. Next, the code wraps the data in a binary format using a Blob object, setting the data type as an Excel file. This is done by setting the type.
  3. After that, the code creates an <a> element (anchor element) for triggering the download operation. It sets the href attribute, assigning the URL of the previously created Blob object to href.
  4. It then sets the download attribute for this <a> element, specifying the file name as “personnelTraceability.xls” for the download.
  5. The <a> element is added to the document.body, which positions it on the page.
  6. By triggering the click event, the code simulates the user clicking the link, initiating the file download operation.
  7. Finally, the code removes this <a> element from the document.body to avoid leaving unnecessary elements on the page.
  8. Lastly, it calls window.URL.revokeObjectURL to release the previously created Blob URL, freeing up browser resources.

So, what should the backend do?

@Override
public void exportExcel(PersonnelTraceabilityDTO queryParams, HttpServletResponse response) throws IOException {
    List<Map> list = findList(queryParams);

}

The first step is to retrieve the data:

private List<Map> findList(PersonnelTraceabilityDTO queryParams) {
    LocalDate startDate = LocalDate.parse(queryParams.getBeginDate(), DateTimeFormatter.ISO_DATE);
    LocalDate endDate = LocalDate.parse(queryParams.getEndDate(), DateTimeFormatter.ISO_DATE);

    List<String> dateList = getDateList(startDate, endDate);

    String sql = getSql(queryParams, dateList);
    List<Map> result = getSqlManager().execute(new SQLReady(sql), Map.class);
    return result;
}

This method is almost identical to the previous backend method, with the only difference being that it doesn’t use PageResult for pagination.

In this project, it’s also necessary to translate the headers and the data (since the raw data is in Chinese). I won’t go into detail about the translation process, but in summary, both the data and the headers need to be produced as separate lists and converted into the form List<List<Object>>. Finally, the following code is used to export the Excel file:

@Override
public void exportExcel(PersonnelTraceabilityDTO queryParams, HttpServletResponse response) throws IOException {
    List<Map> list = findList(queryParams);

    ...getTranslatedData...

    File file = new File("");
    String filePath = null;
    filePath = file.getCanonicalPath();
    filePath = filePath + "/downloadExportTemplate";
    if (!FileUtil.isDirectory(filePath)) {
        FileUtil.mkdir(filePath);
    }
    String fileName = filePath.concat("/personnelTraceability.xls");

    ExcelWriterBuilder writerBuilder = EasyExcel.write(fileName)
            // set column width
            .registerWriteHandler(new SimpleColumnWidthStyleStrategy(25))
            .head(createdHead(headers));
    writerBuilder.sheet("personnelTraceability").doWrite(datalist);

    InputStream in = new FileInputStream(fileName);
    String encode = URLEncoder.encode(fileName, "UTF-8");
    response.setCharacterEncoding("UTF-8");
    response.setContentType("application/vnd.ms-excel;charset=utf-8");
    response.setHeader("Content-Disposition", "attachment; filename=" + encode);
    HSSFWorkbook workbook = new HSSFWorkbook(in);
    workbook.write(response.getOutputStream());
    in.close();
}
  1. File file = new File("");: This line creates an empty File object without specifying a file path.
  2. String filePath = null;: Initializes a string variable filePath.
  3. filePath = file.getCanonicalPath();: This line retrieves the working directory of the current project or application and stores it in filePath.
  4. filePath = filePath + "/downloadExportTemplate";: Appends the string "/downloadExportTemplate" to filePath to specify the download directory for files.
  5. if (!FileUtil.isDirectory(filePath)): Checks whether the specified directory exists. If it doesn’t, the following actions are performed.
  6. FileUtil.mkdir(filePath);: If the directory doesn’t exist, it creates the directory. This is done using a FileUtil class in the project or a third-party library; the details are not explained here.
  7. String fileName = filePath.concat("/personnelTraceability.xls");: Sets the file name and path for the generated Excel file.
  8. ExcelWriterBuilder writerBuilder = EasyExcel.write(fileName): Creates an EasyExcel writer to write data to the Excel file, specifying the file name and path.
  9. .registerWriteHandler(new SimpleColumnWidthStyleStrategy(25)): Sets the column width style, specifying a column width of 25.
  10. .head(createdHead(headers)): Sets the table header information for the Excel sheet. The createdHead(headers) method is used to create the header content.
  11. writerBuilder.sheet("personnelTraceability").doWrite(datalist): Creates a worksheet named “personnelTraceability” and writes the data list datalist to the Excel file.
  12. InputStream in = new FileInputStream(fileName): Opens the generated Excel file and creates an input stream.
  13. String encode = URLEncoder.encode(fileName, "UTF-8"): URL encodes the file name.
  14. response.setCharacterEncoding("UTF-8"): Sets the HTTP response character encoding to UTF-8.
  15. response.setContentType("application/vnd.ms-excel;charset=utf-8"): Sets the content type of the response to Excel.
  16. response.setHeader("Content-Disposition","attachment; filename=" + encode): Sets the response header to inform the browser that this is an attachment and provides the file name.
  17. HSSFWorkbook workbook = new HSSFWorkbook(in): Creates an HSSFWorkbook object for handling the Excel file.
  18. workbook.write(response.getOutputStream()): Writes the Excel data to the output stream of the HTTP response for the browser to download.
  19. in.close(): Closes the input stream.
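The reshaping into List<List<Object>> mentioned earlier can be sketched roughly as follows (method and variable names here are illustrative assumptions, not the project's actual code): each row Map is flattened into a List<Object> following a fixed header-key order, giving the shape EasyExcel's doWrite expects.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ExportRowsSketch {
    // Illustrative helper (an assumption, not the project's code):
    // flatten each row Map into a list of cells ordered by headerKeys,
    // so the result can be passed to EasyExcel's doWrite.
    public static List<List<Object>> toRows(List<Map<String, Object>> data,
                                            List<String> headerKeys) {
        List<List<Object>> rows = new ArrayList<>();
        for (Map<String, Object> row : data) {
            List<Object> cells = new ArrayList<>();
            for (String key : headerKeys) {
                // Missing keys (e.g. dates with no employees) become blanks.
                cells.add(row.getOrDefault(key, ""));
            }
            rows.add(cells);
        }
        return rows;
    }
}
```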

With this, the entire requirement has been largely completed.

Summary

The complete corresponding code is available on GitHub:

Github


Oracle table triggers


Requirement Description

Today there is a new project requirement, which is related to database tables and relatively simple.

There is a table A that records only today’s data, and its contents are refreshed every five minutes: all previous rows are deleted and the updated data is inserted. Although table A deletes all of its previous rows, the refresh does not modify data from more than five minutes ago; it only performs insertions.

TIPS: As for why we cannot directly perform an insertion operation here, the big data department on the other side has reported that this cannot be done, and the specific reasons are unknown.

In summary, based on the current requirements, a trigger is needed to insert the same data into table B when table A inserts data. However, the data in table B will not be deleted, and it will not only retain data for today. In addition, for the incremental table B, to avoid performance issues, it needs to be checked when table A inserts data. If the newly inserted data already exists in table B, it should not be inserted again.

Idea and Implementation

Here it is worth introducing the MERGE INTO statement.

In Oracle SQL, the MERGE INTO statement specifies the target table into which data is merged; USING specifies the source from which to get the data; ON specifies the condition for matching rows between the source and the target. When the condition matches, a WHEN MATCHED THEN block can perform an UPDATE; when it does not, the WHEN NOT MATCHED THEN block specifies the operations to execute (typically an INSERT).

Clearly, MERGE INTO can meet the requirements here.

CREATE OR REPLACE TRIGGER insert_trigger
AFTER INSERT ON A
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
    MERGE INTO B
    USING (SELECT 1 FROM DUAL)
    ON (B.employeeId = :NEW.employeeId AND B.date = :NEW.date)
    WHEN NOT MATCHED THEN
        INSERT (ID, employeeId, date, ...)
        VALUES (:NEW.ID, :NEW.employeeId, :NEW.date, ...);
END;

In the code above, we created a trigger called insert_trigger. It is an AFTER INSERT trigger, which means it executes after data is inserted into table A.

  1. When new data is inserted into table A, the trigger is activated.
  2. The trigger defines named references using ‘REFERENCING,’ using ‘OLD‘ to represent the previous rows (before the insertion) and ‘NEW‘ to represent the newly inserted rows.
  3. The trigger defines ‘FOR EACH ROW,’ indicating that it will execute once for each inserted row.
  4. Between ‘BEGIN‘ and ‘END‘ is the main logic of the trigger. Here, the trigger uses a ‘MERGE‘ statement to handle data in table B.
  5. The ‘MERGE INTO‘ statement allows data from a data source (in this case, ‘SELECT 1 FROM DUAL‘) to be merged into the target table B. In this example, it uses table B as the target.
  6. The ‘USING‘ clause specifies the data source, using a simple ‘SELECT 1 FROM DUAL.’ This clause is typically used to specify a virtual table or query to provide the data to be inserted.
  7. The ‘ON‘ clause defines the condition for the merge operation: it matches the ‘employeeId‘ column in table B against the ‘employeeId‘ of the newly inserted row in table A, and the ‘date‘ column against ‘:NEW.date‘.
  8. ‘WHEN NOT MATCHED THEN‘ defines the operation executed when the ‘ON‘ condition finds no matching row. Here it performs an insertion, copying the new row's data into table B. (The insert happens only when table B contains no row whose ‘employeeId‘ and ‘date‘ both match the newly inserted row in table A.)
  9. The ‘INSERT‘ clause specifies the columns to be inserted and their corresponding values. These values come from the newly inserted row in table A (:NEW.ID, :NEW.employeeId, :NEW.date, etc.).

In summary, the trigger’s purpose is to check for matching rows in table B when new data is inserted into table A and insert the new data into table B if the conditions do not match.
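To make the "insert only when not matched" rule concrete, here is a small Java sketch of the same semantics, keying table B on the (employeeId, date) pair. The Row type and method names are illustrative, not part of the real schema:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class MergeDemo {
    // Hypothetical row type; (employeeId, date) is the MERGE match condition.
    record Row(String employeeId, String date) {}

    // Mirrors MERGE INTO B ... WHEN NOT MATCHED THEN INSERT:
    // a row is appended only if no row with the same key already exists.
    static List<Row> mergeAll(List<Row> tableB, List<Row> inserts) {
        Set<String> keys = new LinkedHashSet<>();
        List<Row> result = new ArrayList<>(tableB);
        for (Row r : result) keys.add(r.employeeId() + "|" + r.date());
        for (Row r : inserts) {
            String key = r.employeeId() + "|" + r.date();
            if (keys.add(key)) {   // WHEN NOT MATCHED
                result.add(r);     // THEN INSERT
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Row> tableB = List.of(new Row("E1", "2023-06-01"));
        List<Row> inserts = List.of(
                new Row("E1", "2023-06-01"),  // already present: skipped
                new Row("E1", "2023-06-02")); // different date: inserted
        System.out.println(mergeAll(tableB, inserts).size()); // 2
    }
}
```

Note that the database trigger does this atomically per inserted row, while this sketch only illustrates the matching logic.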

With this, our trigger implementation is complete.

Summarize

This requirement is relatively straightforward, and the main takeaway is learning about a new trigger function, MERGE INTO.


Dashboard with dropdown boxes/Histogram


This is a new and fairly complete dashboard page requirement, covering drop-down box design with its data collection and restrictions, histogram display in Vue, and more.

Requirement Description

It is necessary to build a screen similar to the following, and the basic functional requirements included are:

Project Records - Enterprise Requirement 3

  • Implementation of drop-down box
  • The project must be selected first before the line and process drop-down boxes are populated
  • Implementation of histogram
  • Implementation of date selection
  • input box for name, employee id
  • Implementation of data tables

Question

As written in the requirement description, the functions to be implemented raise the following questions:

  • How should the drop-down box function be implemented?

  • How should the department tree be obtained?

  • Line and process data need to obtain project data before displaying them. What should I do?

  • How should the dynamic histogram function be implemented?

  • How to query parameters such as date, name, and employee id in the backend?

  • How should the table data be presented?

Idea and Implementation

This project involves the implementation of many functions, so let us think step by step. This time we will not separate front-end and back-end code; instead we will analyze by requirement, because a feature usually involves both. It may be easier to understand when organized around requirements.

Dept department tree cascade selector

I won't go into detail on this part because it involves company-internal information, and it is an already-completed interface: the backend simply calls it and passes the result to the frontend.

The front-end code here is implemented using the cascader of element-ui, and the Vue code is as follows:

(Note that although a cust-cascader component is encapsulated here, the function is similar to el-cascader)

<cust-cascader
v-model="queryParam.deptCode"
class="filter-item"
:placeholder="Dept"
:props = "{value:'deptId',label:'name'}"
:show-all-levels="false"
:options="orgTree.treeList"
ref="cascader"
size="mini"
style="float:left;margin-right:15px;"
/>
  • The data bound by v-model will be passed to the backend through the front end in the future;
  • Attributes such as class, style, size are used for beautification;
  • The show-all-levels attribute is part of el-cascader and is used to determine whether to display the full path of the selected value (ie the full path of the department tree) in the input box
  • placeholder displays text inside the box

So where does the raw data of this drop-down box come from? It is obtained through the combination of options and props: :options binds orgTree.treeList, treeList lives in the data attribute, and it is populated in the created hook:

data(){
return{
orgTree:{
treeList:[],
selectId:null,
defaultProps:{
children:"children",
label:"label"
}
}
}
},

import api from ...
methods:{
handleOrgTreeList(){
return api.getOrgList({orderBy:"orderNum"}).then((res) => {
const data = res.data;
this.orgTree.treeList = data;
})
}
},

created(){
this.handleOrgTreeList().then(() => {
if (this.$isNotEmpty(this.orgTree.treeList)){
const firstNode = this.orgTree.treeList[0];
}
})
}

It can be seen that when the component is created, the handleOrgTreeList method calls the back-end interface and assigns the result to orgTree.treeList, which is then naturally rendered into the cascading selector once the DOM is displayed.

Project and Type drop-down box

We know that according to the normal design logic, the content of the drop-down box should be displayed when the user clicks, instead of waiting for the user to enter a few characters before starting to search and display the data, so it is natural to think that the implementation of the method should be written in created or mounted.

In fact, the implementation logic of these two drop-down boxes is the same. I will use the Type drop-down box as an example to show here. The difference between the Project drop-down box and the Type drop-down box is only in the difference in SQL query.

This time, let us write the backend first. The backend starts with a very simple piece of logic in the Controller class:

@Api("...")
@RequestMapping("/bi")
@RestController
public class BaseController{

private final BaseService baseService;

public BaseController(BaseService baseService){
// Constructor injection. If the code gets bloated later, @AllArgsConstructor can replace this part
this.baseService = baseService;
}

@GetMapping("/type")
@ApiOperation
public WebResult<List<BaseEmployeeEntity>> findType(){
List<BaseEmployeeEntity> list = baseService.findType();
return WebResult.ok(list);
}
}

A GetMapping method:

  • As mentioned before, WebResult is an encapsulated class for customizing the mapping from return values to WSDL.
  • BaseEmployeeEntity is an entity class associated with a database. There are many fields in it, but all we need is the field associated with type, which will be used to obtain employee types: A, B, C, D.
  • QueryParams class, an encapsulated class that receives front-end parameters.

Next, the codes of the Service layer and the ServiceImpl layer are as follows:

BaseService

/**
* @author lu
*/

public interface BaseService extends SysBaseService<BaseMapper,BaseEmployeeEntity>{

/**
* @author lu
* @date 2023/...
* @version 1.0
* @return a list of type field, now (A,B,C,D)
*/
List<BaseEmployeeEntity> findType();
}


BaseServiceImpl

/**
* @author lu
*/

@Service
public class BaseServiceImpl extends SysBaseServiceImpl<BaseMapper,BaseEmployeeEntity> implements BaseService{

@Override
public List<BaseEmployeeEntity> findType(){
LambdaQuery<BaseEmployeeEntity> query = sqlManager.lambdaQuery(BaseEmployeeEntity.class);
// deduplication and check not empty
List<BaseEmployeeEntity> select = query.andIsNotNull(BaseEmployeeEntity::getType)
.groupBy(BaseEmployeeEntity::getType)
.select(BaseEmployeeEntity::getType);
return select;
}
}

The Service implementation first builds a query through BeetlSQL's LambdaQuery, removes rows whose type column is null via andIsNotNull, deduplicates via groupBy, and finally returns the type column via select. This code roughly executes the following SQL statement:

Note that the operation on the table here is actually the operation on the table associated with the BaseEmployeeEntity class

select type from ...
where type is not null
group by type

Through this SQL, the backend returns to the frontend the four employee types A, B, C, and D.
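The WHERE ... IS NOT NULL plus GROUP BY combination above amounts to "drop nulls, then deduplicate". As a rough sketch in plain Java (not the actual BeetlSQL call), the same result can be expressed as:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class TypeQueryDemo {
    // Equivalent of: SELECT type FROM t WHERE type IS NOT NULL GROUP BY type
    static List<String> distinctTypes(List<String> typeColumn) {
        return typeColumn.stream()
                .filter(Objects::nonNull) // WHERE type IS NOT NULL
                .distinct()               // GROUP BY type, used purely for deduplication
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> typeColumn = Arrays.asList("B", "A", null, "A", "C", "D", "B");
        System.out.println(distinctTypes(typeColumn)); // [A, B, C, D]
    }
}
```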

The logic of the Project drop-down box is basically the same as that of Type, so I won’t go into details here.

It is also worth mentioning that the display of this part of the front end:

<el-select
:placeholder="project"
class="filter-item"
size="mini"
filterable
clearable
:loading="selectLoading"
v-model="queryParam.projectCode"
style="float:left;margin-right:15px;"
@change="prochange"
>
<el-option
v-for="item in projectArr"
:key="item.projectCode"
:label="item.projectName"
:value="item.projectCode"
>
</el-option>
</el-select>

<el-select
:placeholder="type"
class="filter-item"
size="mini"
filterable
clearable
:loading="selectLoading"
v-model="queryParam.type"
style="float:left;margin-right:15px;"
>
<el-option
v-for="(item,index) in typeArr"
:key="index"
:label="item.label"
:value="item.value"
>
</el-option>
</el-select>

element-ui provides the el-select component to display the drop-down box. Note the loading attribute: while the data is being fetched, it shows a spinning loading indicator.

The value of the v-model two-way binding is the value that needs to be passed to the backend for searching in the future.

Where do the drop-down values come from? They come from the el-option components: each el-option displays the value of its :label binding, and :value is the value submitted when that option is selected. Taking Type and Project as examples, the el-options iterate over typeArr and projectArr, whose values come from back-end methods. So how does the front end interact with the back end here?

The method used here is mapGetters and mapActions; of course, you could also just import an api directly on the front end.

computed:{
...mapGetters({
projectArr:"getProjectArr",
typeArr:"getTypeArr",
}),
},
methods: {
...mapActions(['updateProjectArr','updateTypeArr']),
init(){
this.updateProjectArr();
this.updateTypeArr();
}
},
mounted() {
this.init();
},
import projectApi from ...
import lineProcessApi from ...

export default{
state:{
projectArr:[],
projectArrVisited:false,
typeArr:[],
typeArrVisited:false
},
getters:{
getProjectArr: state => state.projectArr,
getTypeArr: state => state.typeArr
},
mutations:{
setProjectArr(state,project){
state.projectArr = project
state.projectArrVisited = true
},
setTypeArr(state,type){
state.typeArr = type
state.typeArrVisited = true
}
},
actions:{
updateProjectArr({
commit,
state
},ignorant){
if(!state.projectArrVisited || ignorant){
return new Promise((resolve,reject) => {
let queryParam = {
query:{
projectCode:null,
projectName:null
},
pageSize:9999,
pageNum:1,
pageQuery:true,
fuzzyQuery:true,// default fuzzy query
orderby:null
}
projectApi.list(queryParam).then((res) => {
if(res.code === 200){
commit('setProjectArr',res.data)
resolve(res.data)
}else{
reject()
}
}).catch(() => {
reject()
});
})
}
},
updateTypeArr({
commit,
state
},ignorant){
if(!state.typeArrVisited || ignorant){
return new Promise((resolve,reject) => {
lineProcessApi.type().then((res) => {
if(res.code === 200){
commit('setTypeArr',res.data)
resolve(res.data)
}else{
reject()
}
}).catch(() => {
reject()
});
})
}
},
}
}

Vuex is a state management pattern + library developed specifically for Vue.js applications. It uses a centralized storage to manage the state of all components of the application, and uses corresponding rules to ensure that the state changes in a predictable manner.

Among them, state and getters are used to save the state; mutations and actions are used to change the state; the computed attribute in the Vue component is used to monitor the state; module is used to organize the state management code of the entire application, making the state division module easier to manage; the auxiliary function is used to simplify the code when monitoring the state, and createStore is used to create the state management object.

The corresponding data, such as projectArr, can then be read directly through Vuex's mapGetters, while mapActions triggers the updates.

So far, the function of the drop-down box has been implemented.

Conditional drop-down box

Let us review our requirements: besides the dept, project, and type drop-down boxes, we also have the Line and Process drop-down boxes. Their data behaves a little differently: the project must be selected first before any drop-down options appear. There are also some mapping relationships:

  • A project may only have lines, in which case the process drop-down box has no data
  • A project may only have processes, in which case the line drop-down box has no data
  • A project may have both lines and processes, in which case both drop-down boxes have data

The data collection here is also interesting: table A holds both line and process data, while table B holds only line data. So if project a only has lines, both tables must be queried, because table A may not contain the needed data while table B does.

The back-end implementation is no different from before, except that PostMapping is needed instead of GetMapping, because a project parameter must be received before the Line and process data is returned to the front end:

@PostMapping("/lineOrProcess")
@ApiOperation("lineOrProcess information")
public WebResult<List<BaseEmployeeEntity>> findLineOrProcess(@RequestBody BaseEmployeeEntity queryParam){
List<BaseEmployeeEntity> list = baseService.findLineOrProcess(queryParam);
return WebResult.ok(list);
}

It is worth thinking about: what data do we need to return?

One idea is to use an identifier marking whether a row is a line or a process, so the returned data looks like this:

type code name
2 D1 process2
2 D2 process3
3 S1 line1

A type of 3 means Line and a type of 2 means process; the code column identifies each line or process, and the name column holds the value displayed in the drop-down box.

With this idea in mind, it can be implemented in SQL. The back-end code is simple, so I won't repeat it: the ServiceImpl calls the Mapper method, and the Mapper method runs the SQL.

The SQL code is as follows:

findLineOrProcess
====
SELECT line_type,line_code,line_name
from A
where project_code = #{projectCode}
group by line_type,line_code,line_name
union
select '3',line_code,line_name
from B
where project_code = #{projectCode}
group by line_code,line_name

As #{projectCode} shows, this SQL script takes projectCode as the query parameter. The reason a UNION is used here is to satisfy the conditions above: table A may hold both line and process data while table B holds only line data, so when the project code is passed in, both tables can be searched at once.

There are several possible cases:

  • The project only has lines, and the line data is only in table A. We get the data from table A, table B returns nothing, and the UNION does not affect the rows from table A;
  • The project only has lines, and the line data is only in table B. We get the data from table B, table A returns nothing, and the UNION does not affect the rows from table B;
  • The project's line data exists in both tables A and B, so we fetch from both and the UNION merges them. The type column of the rows from table B is hard-coded to '3', because table B contains only line data (type 3). Also, by SQL's rules, the column names of the first table in the UNION take precedence over those of the second, which meets our requirement.
  • The project only has processes. Process data exists only in table A, so table B returns nothing and the UNION does not affect the rows from table A;
  • The project has both processes and lines; this works the same as the case where line data exists in both tables A and B.
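The UNION behavior can be sketched in plain Java (the Section type is hypothetical, not the production code): rows coming from table B always get type '3', and duplicates disappear in the merge:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class LineProcessUnionDemo {
    // Hypothetical row shape: type '3' = line, type '2' = process.
    record Section(String type, String code, String name) {}

    // Mimics: SELECT line_type, line_code, line_name FROM A WHERE project_code = ?
    //         UNION
    //         SELECT '3', line_code, line_name FROM B WHERE project_code = ?
    static List<Section> union(List<Section> fromA, List<Section> fromB) {
        LinkedHashSet<Section> merged = new LinkedHashSet<>(fromA); // UNION deduplicates
        for (Section s : fromB) {
            merged.add(new Section("3", s.code(), s.name())); // B holds only line data
        }
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        List<Section> fromA = List.of(new Section("2", "D1", "process2"),
                                      new Section("3", "S1", "line1"));
        List<Section> fromB = List.of(new Section("3", "S1", "line1"),  // duplicate of A's row
                                      new Section("3", "S2", "line2"));
        System.out.println(union(fromA, fromB).size()); // 3
    }
}
```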

With this, the back-end data collection is done. How does the front end collect it?

Obviously, we need to check the type of each returned row and place it into the corresponding el-option loop for display:

toSectionSearch(val){
// project can not be null
if(
this.queryParam.projectCode == "" || this.queryParam.projectCode == undefined
){
this.$message.error("project can not be null");
return;
}
// Some projects only have processes or lines, and some have both processes and lines
if(val != ""){
this.selectLoading = true;
this.lineList=[];
this.processList=[];
lineProcessApi.lineOrProcess({
projectCode:this.queryParam.projectCode
}).then((res) => {
let sectionData = res.data;
// lineType is 3 means line
this.lineList = sectionData.map((item) =>{
if(item.lineType == "3"){
return{
value:item.lineCode,
label:item.lineName
};
}else{
return{
value:null,
label:null
};
}
});
// lineType is 2 means process
this.processList = sectionData.map((item) => {
if(item.lineType == "2"){
return{
value:item.lineCode,
label:item.lineName
};
}else{
return{
value:null,
label:null
};
}
});
// remove all null
this.lineList = this.lineList.filter((item) => {
return item.value != null;
});
this.processList = this.processList.filter((item) => {
return item.value != null;
});
});
}else{
this.lineList=[];
this.processList=[];
}

this.$nextTick(() => {
this.selectLoading=false;
});
},
import BaseAPI from ...

// defaults to the Controller class's /bi base path
const api = new BaseAPI('/bi')

api.lineOrProcess = function(query){
return api.httpRequest({
// baseUrl is the default url "/" when the url is empty
url:api.baseUrl + '/lineOrProcess',
method:'post',
data:query || {}
})
};

The Vue code here first checks that project is not empty, and then calls the lineProcessApi.lineOrProcess method (shown in the second code snippet).

Calling lineOrProcess returns data shaped like the table shown earlier.

Then a check is performed with the map function on each element: if type is 2 it is put into processList, if type is 3 it is put into lineList, and anything else is mapped to null. Note that filter must then be used to strip out these null values, because nulls would otherwise appear in the drop-down box as blank entries.

With that, the feature is implemented. The Vue code is shown below:

<el-select
:placeholder="process"
class="filter-item"
size="mini"
filterable
clearable
:loading="selectLoading"
v-model="queryParam.processCode"
style="float:left;margin-right:15px;"
>
<el-option
v-for="(item,index) in processList"
:key="index"
:label="item.label"
:value="item.value"
>
</el-option>
</el-select>

<el-select
:placeholder="line"
class="filter-item"
size="mini"
filterable
clearable
:loading="selectLoading"
v-model="queryParam.lineCode"
style="float:left;margin-right:15px;"
>
<el-option
v-for="(item,index) in lineList"
:key="index"
:label="item.label"
:value="item.value"
>
</el-option>
</el-select>

Table data display

This part is relatively simple. element-ui has a corresponding el-table for data display. Here I directly use the packaged table component, called app-table:

<div style="margin-top: 5px;">
<el-tabs v-model="tabCurrent">
<div style="margin-top: 6px;">
<app-table
ref="table"
class="type-table"
v-loading="dataTable.loading"
:border="true"
:data="dataTable.dataList"
:columns="tableFields"
overflow="auto"
></app-table>
</div>
</el-tabs>
<pagination
:total="dataTable.total"
:page.sync="queryParam.page"
:limit.sync="queryParam.limit"
@pagination="handleList"
></pagination>
</div>


:data stores the data from the backend, and the pagination component handles paging. It needs the page and limit parameters to display the page count and cap the number of rows per page.

At the same time, the front-end method for obtaining data is also called through @pagination:

async handleList(){
this.dataTable.loading=true;
await lineProcessApi.findDetail({
deptCode:this.queryParam.deptCode,
projectCode:this.queryParam.projectCode,
processCode:this.queryParam.processCode,
lineCode:this.queryParam.lineCode,
type:this.queryParam.type,
beginDate:this.queryParam.beginDate,
endDate:this.queryParam.endDate,
name:this.queryParam.name,
employeeId:this.queryParam.employeeId,
page:this.queryParam.page,
limit:this.queryParam.limit,
}).then((res) => {
this.dataTable = Object.assign(this.dataTable,{
dataList:res.data.list,
total:res.data.totalRow
});
this.dataTable.loading=false;
});
},

This is an asynchronous method that calls lineProcessApi.findDetail with the parameters shown (deptCode, projectCode, and so on), and finally merges the returned data into dataTable via Object.assign.

How is this part of the backend implemented?

In fact, the idea of dashboard design is very simple:

Understand what data the front-end needs, and what data the back-end needs to pass in from the front-end.

Starting from this idea, we know that if we want to do a query, the parameters that the front end needs to pass to the back end are the contents of the drop-down box or input box on the page:

That is, deptCode, projectCode, processCode, lineCode, name, employeeId, and other values. On the back end these values are stored in a DTO class, the SQL query is performed through the mapper, and the query result is put into a VO class and returned to the front end. The data the front end needs to receive is what must be displayed in the table, that is, employee information: name, address, type, etc.

So we can create a new DTO class as follows, and the VO class is similar:

    /** 
* @author lu
* @date 2023/...
* @version 1.0
* ...
*/

@Data
@ApiModel("BaseDTO")
public class BaseDTO extends PageParam implements Serializable{

private static final long serialVersionUID = ...;

@Query
@ApiModelProperty("deptCode")
private String deptCode;

@Query
@ApiModelProperty("process")
private String processCode;
@Query
@ApiModelProperty("line")
private String lineCode;

@Query
@ApiModelProperty("type")
private String type;

@Query
@ApiModelProperty("beginDate")
private String beginDate;

@Query
@ApiModelProperty("endDate")
private String endDate;

@Query
@ApiModelProperty("name")
private String name;

@Query
@ApiModelProperty("employeeId")
private String employeeId;

}

This DTO class stores the data transmitted from the front end, and the code can be completed in the Controller and Service layers according to the normal process. It should be noted that the code of the ServiceImpl class:

@Override
public PageResult<BaseVO> findDetail(BaseDTO queryParam){
// if deptCode is null, then give a default dept code by userId
if (StringUtils.isEmpty(queryParam.getDeptCode())){
queryParam.setDeptCode(anotherService.setHighDeptCode());
}
Map<String,Object> pageParam = queryParam.getPageParam();
PageResult<BaseVO> vos = mapper.findDetail(queryParam.getPageRequest(),pageParam);
return vos;
}

Some null value judgments need to be made here, because the user has not passed any parameters when they just saw the page, but at this time the report needs to display some values, so some default values need to be set here.

pageParam is the paging class of beetlsql, which stores limit (the number of data displayed on each page) and page (the current number of pages) information.

These data will be combined and queried in the mapper layer through the @Root annotation:

   /**
* @author lu
* @date 2023/...
* @version 1.0
* ...
*/
PageResult<BaseVO> findDetail(PageRequest<BaseDTO> query, @Root Map pageParam);

The SQL code is as follows:

findDetail
====

SELECT -- @pageTag(){
name,employeeId,sex...
-- @}
FROM(
SELECT
nbe.name as name,
nbe.employeeId as employeeId,
...
FROM
employee nbe INNER JOIN(
SELECT nbd.deptId,nbd.deptName
FROM department nbd START WITH nbd.deptId = #{deptCode}
CONNECT BY nbd.up_deptId = PRIOR nbd.deptId
) nbd ON nbd.deptId = nbe.deptId
LEFT JOIN ...
WHERE 1=1
-- @if(!isEmpty(projectCode) || !isBlank(projectCode)){
and nbe.projectCode = #{projectCode}
-- @}
-- @if(...){
...
-- @}
...
GROUP BY
nbe.name,
nbe.employeeId,
...)

Ignore the -- @ directives for now; the SQL logic itself is straightforward: join the related tables, apply the WHERE conditions, and deduplicate with GROUP BY. The BeetlSQL details are worth mentioning, though. First, the reason a SELECT is nested around the outermost query is a BeetlSQL requirement: if you want to paginate a GROUP BY query, the query must be wrapped in an outer SELECT.

So why not use DISTINCT for deduplication? This comes from a limitation in BeetlSQL paging.

When paging, BeetlSQL must compute a TotalRow attribute, which is used to calculate the total number of rows and hence the page count. But this total is computed independently of the SELECT list: BeetlSQL issues a new select count(*) query for TotalRow, which conflicts with select distinct (because select count(*) does not deduplicate), so GROUP BY must be used here.

Another problem is that BeetlSQL cannot paginate a GROUP BY result directly; an outer SELECT must be nested around it for paging, which is why the SQL here is written as an inner query wrapped in an outer one.

-- @pageTag() is used in select, which can implement paging calculation;

-- @if(!isEmpty(projectCode) || !isBlank(projectCode)) is used for conditional judgment. Only when the result in the if statement is valid, the following SQL statement will take effect.
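The -- @if directive is essentially conditional SQL assembly: a clause is appended only when its parameter is non-blank. A hypothetical Java sketch of the same idea (illustrative names, not the BeetlSQL internals):

```java
public class DynamicWhereDemo {
    // Appends "and nbe.projectCode = ?" only when projectCode is non-blank,
    // mirroring -- @if(!isEmpty(projectCode) || !isBlank(projectCode)){ ... -- @}
    static String buildWhere(String projectCode, String lineCode) {
        StringBuilder sql = new StringBuilder("WHERE 1=1");
        if (projectCode != null && !projectCode.isBlank()) {
            sql.append(" and nbe.projectCode = ?");
        }
        if (lineCode != null && !lineCode.isBlank()) {
            sql.append(" and nbe.lineCode = ?");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildWhere("P001", "")); // WHERE 1=1 and nbe.projectCode = ?
        System.out.println(buildWhere(null, null)); // WHERE 1=1
    }
}
```

The always-true WHERE 1=1 base clause is what lets every optional condition start uniformly with "and".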

So far, the overall logic of the data table has been implemented.

Histogram display

This requirement requires dynamic display of the number of employee contract expirations in each month in the next ten months, and if the number of employee expirations in the current month is 0, it must still be displayed.

We cannot start drawing the histogram from the backend, because at this point we do not yet know what parameters are needed.

The required parameters depend on how the page is drawn, and the ECharts library is used here. So I create a new Vue component: histogram.vue

<template>
<div
ref="echarts"
class="echarts"
:style="{width,height}"
></div>
</template>

<script>
import * as echarts from 'echarts';
import ...

export default {
components:{},
props:{
width:{
type:String,
default:"2400px"
},
height:{
type:String,
default:"200px"
},
options:Object
},
watch:{
options:{
handler:function(newVal,oldVal){
this.redraw()
},
deep:true
}
},
computed:{
myOptions(){
return this.options || {}
}
},
data(){
return{
myChart:null
};
},
mounted() {
this.init()
},
destroyed() {
window.onresize=null
this.myChart = null
bus.$off('toggleSideBar')
},
methods: {
init(){
this.myChart = echarts.init(this.$refs.echarts)
this.myChart.setOption(this.myOptions)
window.onresize = () => this.myChart.resize()
bus.$on('toggleSideBar',data=>{
this.redraw()
window.onresize = () => this.myChart.resize()
})
},
showLoading(){
this.myChart.showLoading()
},
hideLoading(){
this.myChart.hideLoading()
},
redraw(){
if(!this.myOptions) return
this.myChart.clear()
this.myChart.setOption(this.myOptions)
this.myChart.resize()
}
},
}
</script>

<style lang="scss" scoped>
.echarts{
left:0;
right:0;
top:0;
bottom:0;
margin: 0;
padding: 0;
}

</style>

This drawing component accepts a few props; it is used as follows:

<template>
<div class="app-container">
<div class="filter-container header-search" style="display:flex;flex-wrap:wrap;">

<el-row style="margin-top: 20px;">
<el-col :span="10">
<div class="echart-box">
<histogram
:options="myOptions"
ref="typeByMonth"
></histogram>
</div>
</el-col>
</el-row>

</div>
</div>
</template>

<script>
import Histogram from "...";

export default {
name:"...",
components:{
AppTable,
AppFilterForm,
Histogram,
CustCascader
},
data() {
return {
myOptions:{
title:{
text:"...",
show:true
},
tooltip:{
trigger:"axis",
axisPointer:{
type:"shadow"
},
},
grid:{
left:"...",
right:"..."
},
xAxis:{
type:"category",
data:[],
axisPointer:{
type:"shadow",
},
},
yAxis:{
type:"value"
},
series:[
{
name:"...",
type:"bar",
itemStyle:{
normal:{
color: "#5470c6",
},
},
data:[]
},
],
},
};
},
methods: {
// plot chart
typeByMonth(){
lineProcessApi.plotChart({
deptCode:this.queryParam.deptCode,
projectCode:this.queryParam.projectCode,
processCode:this.queryParam.processCode,
lineCode:this.queryParam.lineCode,
type:this.queryParam.type,
beginDate:this.queryParam.beginDate,
endDate:this.queryParam.endDate,
name:this.queryParam.name,
employeeId:this.queryParam.employeeId,
page:this.queryParam.page,
limit:this.queryParam.limit,
}).then((res) => {
if(res.code == 200){
this.chartList = res.data;
this.myOptions.xAxis.data = this.chartList.map((item) => item.month);
this.myOptions.series[0].data = this.chartList.map((item) => item.count);
this.$refs.typeByMonth.redraw();
}
});
},
},


};
</script>

<style lang="scss" scoped>
.echart-box{
padding: 15px;
border: 1px solid #dcdfe6;
box-shadow: 0 2px 4px 0 rgba(0, 0,0,0.12), 0 0 6px 0 rgba(0, 0,0,0.04);
}

</style>

Here it can be seen that the histogram data is passed in through :options, which the component watches via its watch option; the typeByMonth method fetches the data and triggers a redraw through the component ref.

This method is very similar to the earlier data-query method, except that the returned data no longer has to be displayed column by column; instead it looks like this:

month count
2023-1 300
2023-2 350
2023-3 400

The data in the month column will be used as the x-axis display, that is, which months are they:

this.myOptions.xAxis.data = this.chartList.map((item) => item.month);

The data in the count column will be displayed on the y-axis, that is, the number of employees per month:

this.myOptions.series[0].data = this.chartList.map((item) => item.count);

Therefore, the front end receives far fewer parameters, and this part only needs a slight modification of the back-end query logic from the previous section. In the SQL, instead of returning the detailed SELECT columns, return the result of the count function grouped by month.

Note, however, that the monthly data must be displayed dynamically, so part of the logic has to live in the backend. The requirement also states that a month with 0 expiring employees must still be displayed, which conflicts with how SQL works: a month with no rows simply never appears after GROUP BY (it could also be handled in SQL, but it is cleaner in backend code):

@Override
public List<chartVO> plotChart(chartDTO queryParam){
// if deptCode is null, then give a default dept code by userId
if (StringUtils.isEmpty(queryParam.getDeptCode())){
queryParam.setDeptCode(anotherService.setHighDeptCode());
}
List<chartVO> chartData = mapper.plotChart(queryParam);
List<chartVO> monthTemplate = nextNmonthData(10);
// Map, time complexity O(n)
HashMap<String,String> map = new HashMap<>(chartData.size());
for(int i = 0;i < chartData.size();i++){
map.put(chartData.get(i).getMonth(),chartData.get(i).getCount());
}
for(chartVO vo : monthTemplate){
if(map.containsKey(vo.getMonth())){
vo.setCount(map.get(vo.getMonth()));
}
}
return monthTemplate;
}

/** return next N month data
*
* @author lu
* @date 2023/...
* @version 1.0
* ...
*/
private List<chartVO> nextNmonthData(int n){
ArrayList<chartVO> arr = new ArrayList<>(n);
LocalDate today = LocalDate.now();
// plus n month
for(int i = 0;i < n;i++){
LocalDate localDate = today.plusMonths(i);
String ym = localDate.format(DateTimeFormatter.ofPattern("yyyy-MM"));
chartVO vo = new chartVO();
vo.setMonth(ym);
vo.setCount("0");
arr.add(vo);
}
return arr;
}

Here I wrote a new method that builds a list of chartVO objects for the next ten months, each initialized to 0. The chartVO class has only two attributes: month and count.

Then a HashMap is used to index the SQL query results. When a month in the SQL result has data, the corresponding chartVO entry is changed from 0 to that value. The HashMap is used to keep the time complexity at O(n), rather than the O(n²) a nested-loop lookup would cost.
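The fill-then-overwrite idea above can be sketched as a standalone, runnable snippet. Note that MonthFillDemo and nextMonths are simplified stand-ins I made up, not the project's actual chartVO classes:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashMap;
import java.util.Map;

public class MonthFillDemo {

    // Build "yyyy-MM" keys for the next n months starting at `start`,
    // defaulting every month to "0", then overwrite with queried values.
    static Map<String, String> nextMonths(LocalDate start, int n, Map<String, String> queried) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM"); // MM = month; mm would be minutes
        Map<String, String> result = new LinkedHashMap<>();
        for (int i = 0; i < n; i++) {
            String ym = start.plusMonths(i).format(fmt);
            result.put(ym, queried.getOrDefault(ym, "0"));
        }
        return result;
    }

    public static void main(String[] args) {
        // Pretend the GROUP BY query only returned a row for February
        Map<String, String> queried = Map.of("2023-02", "5");
        System.out.println(nextMonths(LocalDate.of(2023, 1, 15), 3, queried));
        // prints {2023-01=0, 2023-02=5, 2023-03=0}
    }
}
```

Using getOrDefault folds the HashMap lookup and the zero-filling into a single pass.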

So far, the display of the histogram is completed.

Summarize

The complete corresponding code is here:
Github
In general, this requirement was harder than the previous ones, but the overall logic is not difficult. To complete a report requirement, you only need to think clearly about:

What data does the front end need, and what data does the back end need to pass in from the front end?

In addition, I also learned some ideas about functional design:

  1. Cascade selector design
  2. Drop-down box design and back-end interaction
  3. Drop-down boxes with prerequisite conditions and their back-end interaction
  4. Table data display and back-end interaction
  5. Histogram design


Scheduling email task optimization (Outlook mailbox)


Requirement Description

The previous employee who designed this scheduled task simply returned the data via toString after fetching it, so each email looked like this:

Project Records

Obviously, this is not a format conducive to human reading, so this requirement asks me to change the style: display the data as a table and attach an Excel file. A reasonable style should look like this:

Project Records

Question

Seeing the requirement, the next problem to be solved is:

  • How to debug the scheduled task?

  • What does the acquired data look like?

  • How should an Excel file be attached to the email?

  • How to make the mail display as required?

Idea and Implementation

This project mainly expanded my knowledge: how to use scheduled tasks, and how to use EasyPoi features such as Excel export.

Also, here’s a backend project:

Back end

First of all, the code as I received it looked like this:

// every day at 23:00
@Scheduled(cron = "0 0 23 * * ?")
@Transactional(rollbackFor = Exception.class)
public void removeEmployee() throws Exception {
    // ... get data by SQL ...

    List<sysUserEntity> userList = new ArrayList<>();
    List<businessUserEntity> arrList = new ArrayList<>();

    // ... put data into these two ArrayLists ...

    MailUtil.send("...@...", userList.toString(), ...);
    MailUtil.send("...@...", arrList.toString(), ...);
}

Now, think about it using the approach mentioned earlier for addressing project requirements:

Solve requirements by thinking about problems

For the first question, **how to debug the scheduled task?**

For the method of debugging scheduled tasks, please refer to another post of mine.

The second question: **what does the acquired data look like?**

In the previous code, the send task executed twice, sending the two lists userList and arrList. These lists wrap the two entity classes sysUserEntity and businessUserEntity, so what is being sent is the data of those two entity classes.

Since the two sends happen together and involve similar operations, I figured I might need to write a generic method that can accept either entity class.

The third question: **how should an Excel file be attached to the email?**

Some API is probably needed here, such as Hutool's MailUtil.send method for sending emails.

This method receives parameters such as the recipient address and, more importantly, it can also accept HTML content, so we can write an HTML template file to improve the original email format.

At this point, the basic idea is already there.

The fourth question: **how to make the mail display as required?**

This problem needs to be analyzed step by step, and it can be expanded into many small problems.

First of all, since we are going to use Hutool's MailUtil.send method to send emails, what parameters does this method receive?

MailUtil.send

Because this method is overloaded, I will directly introduce the parameters of the method we are going to use:

The source code of this method of Hutool is as follows:

/**
 * Send mail to multiple people
 *
 * @param tos recipient list
 * @param subject title
 * @param content text
 * @param isHtml Whether it is in HTML format
 * @param files list of attachments
 */
public static void send(Collection<String> tos, String subject, String content, boolean isHtml, File... files) {
    Mail.setTos(tos.toArray(new String[tos.size()]))//
            .setTitle(subject)//
            .setContent(content)//
            .setHtml(isHtml)//
            .setFiles(files)//
            .send();
}

To summarize, it needs these parameters:

  • tos: the recipients; normally more than one can be set, here there is only one
  • subject: the email subject
  • content: the body of the email, either plain text or HTML
  • isHtml: whether content is HTML; if true, the content parameter is treated as HTML
  • files: optional attachments; pass zero or more File objects through the varargs parameter

Therefore, if we want to modify the format of the email, we need to pass an HTML page in the parameters. This HTML page also needs to receive the data and display it in the mail. So here I used the template feature of BeelUtil, which requires a btl file.

This file adds, at the beginning of the email in HTML format, the passage "Dear Manager: Employee who resigned …". A table is then generated with HTML tags such as td, and the data is filled in dynamically with JSP-style template syntax, so I won't show it in detail here. Just follow the format of a front-end page and build it with your own ideas.

Here the code to import templates using BeelUtil is as follows:

GroupTemplate gt = BeelUtil.getGt();
Template template = gt.getTemplate(templateKeyPath);
Map<String, Object> map = new HashMap<>();
map.put("lists", listsCopy);
template.binding(map);

templateKeyPath is the path of the btl file, and listsCopy is a copy of the ArrayList data. The copy is needed because, without it, the data is consumed once the template is bound; the template would then find no data, and an empty email would be sent.

Excel Export

The template is done. Now we need to add an Excel attachment to the email, so we first export the data to an Excel file, then pass that file to MailUtil.send as a parameter.

For the Excel export, the ExcelExportUtil.exportExcel method of EasyPoi is used here.

This method also has many overloads; the source of the one we use is shown below:

/**
 * Create corresponding Excel based on Entity
 *
 * @param entity    Table title attribute
 * @param pojoClass Excel object Class
 * @param dataSet   Excel object data List
 */
public static Workbook exportExcel(ExportParams entity, Class<?> pojoClass, Collection<?> dataSet) {
    Workbook workbook;
    if (ExcelType.HSSF.equals(entity.getType())) {
        workbook = new HSSFWorkbook();
    } else if (dataSet.size() < 1000) {
        workbook = new XSSFWorkbook();
    } else {
        workbook = new SXSSFWorkbook();
    }
    new ExcelExportServer().createSheet(workbook, entity, pojoClass, dataSet, null);
    return workbook;
}

It can be seen that the parameters required by the exportExcel method are:

  • entity: an ExportParams object holding the Excel file's attributes, such as the title and header-row content
  • pojoClass: the entity class to export; in this example sysUserEntity.class or businessUserEntity.class
  • dataSet: the data to write

For privacy reasons I will not detail the ExportParams I set here; they are encapsulated in the getExportParams method. Finally, the exported Excel is read and passed as a parameter to MailUtil.send. Specifically:

// set title and other attributes in excel
ExportParams export = this.getExportParams(title);

// export Excel
Workbook workbook = ExcelExportUtil.exportExcel(export, entityClass, lists);
...
com.spire.xls.Workbook _workbook = poiUtils.convertToSpireWorkbook(workbook);
// encapsulated file-output method
File file = this.getFileAttach(_workbook);
// send email
MailUtil.send("....@...", subjectTitle, template.render(), true, file);
  • title is a string passed in manually and set as the internal Excel title via getExportParams.

  • subjectTitle is the email subject string, passed in manually.

  • template.render() renders the template obtained above.

But there is a problem here: entityClass is the required Class object. For this example we know it should be sysUserEntity.class or businessUserEntity.class, but this is a generic method! You cannot write T.class directly because of type erasure.

So, here comes the question: how do we get the generic type?

Reflection to get the generic Class object

Trying to write T.class in the IDE fails even to compile, so what should I do?

Although generics are erased during compilation to bytecode, a Class object still records its implemented interfaces and inherited superclass through java.lang.reflect.Type. Let's take ArrayList<E> as an example:

ArrayList<String> strings = new ArrayList<>();
Type genericSuperclass = strings.getClass().getGenericSuperclass();
// genericSuperclass = java.util.AbstractList<E>
System.out.println("genericSuperclass = " + genericSuperclass);

The result still contains the placeholder E.

But obviously, what we want is String, not that E.

genericSuperclass is of type Type, and Type has four subtypes:

  • GenericArrayType describes a generic array type.
  • WildcardType describes wildcard types involving ?, including ? itself, the lower-bounded wildcard ? super E, and the upper-bounded wildcard ? extends E.
  • Class<T> describes the Class object of a class.
  • ParameterizedType describes parameterized types.

Let's test with instanceof; what will each check output?

ArrayList<String> strings = new ArrayList<>();

Type genericSuperclass = strings.getClass().getGenericSuperclass();

System.out.println(genericSuperclass instanceof ParameterizedType); // true
System.out.println(genericSuperclass instanceof Class);             // false
System.out.println(genericSuperclass instanceof WildcardType);      // false
System.out.println(genericSuperclass instanceof GenericArrayType);  // false

So, what do we get by treating it as a ParameterizedType?

ParameterizedType parameterizedType = (ParameterizedType) genericSuperclass;
Type[] actualTypeArguments = parameterizedType.getActualTypeArguments();
System.out.println("actualTypeArguments = " + Arrays.toString(actualTypeArguments));

The returned Type[] array contains only [E]; it seems the result is still E, so the attempt appears to have failed.

But why?

The reason actually lies in ArrayList itself; look at its source code:

public class ArrayList<E> extends AbstractList<E>
        implements List<E>, RandomAccess, Cloneable, java.io.Serializable
{
    ...
}

When an ArrayList is instantiated normally, only its own type argument is specified; the type argument of its parent class AbstractList remains the type variable E, so the placeholder E is what we get back.

So, in actual use, what should I do if I just want the String type of the ArrayList?

In fact, there is a simple trick: build an anonymous subclass.

// Look at the curly braces added at the end

ArrayList<String> strings = new ArrayList<String>() {};

The braces {} define an anonymous subclass, and that subclass fixes the parent's generic type argument. Why does this work? Because the anonymous inner class's superclass is ArrayList<String>, so the concrete type String is recorded at compile time and survives erasure.

Here we can write the project like this:

Type genericSuperclass = lists.getClass().getGenericSuperclass();
Class<?> entityClass = (Class<?>)((ParameterizedType) genericSuperclass).getActualTypeArguments()[0];

lists is the ArrayList we want to pass in.
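The two observations above can be verified with a small self-contained sketch (GenericTypeDemo and firstTypeArgument are my own names, not the project's): a plain ArrayList only yields the placeholder E, while the anonymous-subclass trick recovers the concrete class.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;

public class GenericTypeDemo {

    // Mirrors the project code: read the first actual type argument
    // of the list's generic superclass.
    static Type firstTypeArgument(List<?> list) {
        Type superType = list.getClass().getGenericSuperclass();
        return ((ParameterizedType) superType).getActualTypeArguments()[0];
    }

    public static void main(String[] args) {
        // Plain instance: the generic superclass is AbstractList<E>, so we only get "E"
        System.out.println(firstTypeArgument(new ArrayList<String>()));
        // Anonymous subclass: the generic superclass is ArrayList<String>, so we get String
        System.out.println(firstTypeArgument(new ArrayList<String>() {}));
    }
}
```

The second call returns String.class, which is exactly what the project passes to EasyPoi as entityClass.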

Set the recipient in the configuration file

At this point the code is basically complete, but the recipient's email address is hard-coded as the first parameter of MailUtil.send.

What if I still want to change the recipient email address in the configuration file?

Very simple, set key-value pairs in application.properties:

auto.email=aaaaa@bbbbb.com

Inject in Java Bean:

@Value("${auto.email}")
private String email;

Then use this.email as the recipient parameter of MailUtil.send.

Summarize

The complete code is roughly like this:

@Value("${auto.email}")
private String email;

@Override
public <T> void sendDeletedEmployeeAuto(List<T> lists, String templateKeyPath, String subjectTitle,
        String title) throws IOException {
    // isNotEmpty already covers both null and size == 0
    if (CollectionUtils.isNotEmpty(lists)) {
        // Get the generic type (works because lists is created as an anonymous ArrayList subclass)
        Type genericSuperclass = lists.getClass().getGenericSuperclass();
        Class<?> entityClass = (Class<?>) ((ParameterizedType) genericSuperclass).getActualTypeArguments()[0];

        // Copy the ArrayList so the template binding does not consume the original data
        List<T> listsCopy = ListUtil.toCopyOnWriteArrayList(lists);

        // template
        GroupTemplate gt = BeelUtil.getGt();
        Template template = gt.getTemplate(templateKeyPath);
        Map<String, Object> map = new HashMap<>();
        map.put("lists", listsCopy);
        template.binding(map);

        // set title in excel
        ExportParams export = this.getExportParams(title);

        // export Excel
        Workbook workbook = ExcelExportUtil.exportExcel(export, entityClass, lists);
        ...
        com.spire.xls.Workbook _workbook = poiUtils.convertToSpireWorkbook(workbook);
        // encapsulated file-output method
        File file = this.getFileAttach(_workbook);
        // send email
        MailUtil.send(this.email, subjectTitle, template.render(), true, file);
    }
}

After this, the sent email displays as required; the specific style depends on the HTML file you write.

This requirement was a brand-new area I hadn't touched before, and I also got to write a generic method myself, which was very interesting. I mainly learned:

  1. Use of MailUtil.send method of Hutool
  2. How to use EasyPoi's ExcelExportUtil.exportExcel method
  3. How to get the generic type through reflection
  4. How to inject configuration file attributes into classes

Click to drill down to a new page

Indie project started

Requirement Description

In a word: add a click handler to the data in a Vue table, so that clicking a value opens a new page with detailed information about the clicked data.

Suppose I have an already-implemented pie chart showing the percentages of the four positions A, B, C, and D. Below the pie chart there is a table showing the position name, number of people, and percentage, roughly as follows:

【pie chart】 <- the pie chart goes here, on top of the table

Job type Number of people Percentage
A 100 10%
B 200 20%
C 300 30%
D 400 40%

Now, clicking the number of people (e.g. clicking 100 in the table above) should jump to a new page that displays the information of those 100 people in a table, with pagination. As follows:

Job type Name EmployeeId Dept Position Sex
A pyrrhic 025618 IT Staff M
A Mike 159736 Business CEO F
A Done 789156 IT Junior F

Question

Looking at the requirements, I thought about them and felt the problems to solve are:

  • How to add front-end click event?

  • How does the front end implement jumping to a new page?

  • What functionality does the new page need to implement?

  • The original data already exists, how should I obtain it? Where can I get it?

  • What parameters are required for the front and back ends?

Idea and Implementation

Here, unlike the previous project, it is not just "Frontend Code Mapping Backend Code"; the new idea for tackling enterprise projects is:

Solve requirements by thinking about problems

Front end

  • The first question: How to add the front-end click event?

The original implementer used element-ui's el-table for the table. But we know the data in a Vue table is not hard-coded; it is dynamically bound through :data, and the el-table component has no per-column click event. How can I add a click event to the corresponding column?

This troubled me for a long time. There are too many ambiguous answers on the Internet, and I couldn't find a suitable one. In the end I decided to look at other pages where something similar was already implemented, looking for a possible solution. This gave birth to a new idea for solving enterprise projects:

learn by analogy

Through persistent thinking and searching, I found a similar requirement implemented on another page. Although it does not add a click event to the data, it inspects the data of a certain column in the table and returns an "empty" string when the value is empty.

This gave me an inspiration, just write like this:

<template #value="scope">
    <span @click="employeeDetail(scope.row)">{{ scope.row.value }}</span>
</template>

Bind the column via its slot name with #, then call the click method, and Vue can find the corresponding handler. This handler needs to be able to jump to a new page.

  • Then, the second question: How does the front end implement jumping to a new page?

Here you need the router's push method, namely:

methods: {
    employeeDetail(row) {
        // navigation goes through this.$router, not this.$route
        this.$router.push({
            name: ...,
            params: {
                ...
            }
        })
    }
}

Then, find router.vue in the company's mountain of code, add the name and path of the page you want to jump to, and point it at a new Vue component: that component is the new page, so now you just need to create a new file!

  • For the third question: **what functionality does the new page need to implement?**

Obviously, I need an el-table table to store the data from the backend, and I also need a pagination tag to paginate the table.

So far, the basic framework and content of the front end are in place; now look at the back end.

Back end

  • The fourth question: The original data already exists, how should I obtain it? Where can I get it?
  • The fifth question: **what parameters are required for the front and back ends?**

Both of these problems can be solved on the back end.

By analyzing the back-end method, we can understand that the number of employees in the back-end is obtained through this logic:

Controller -> Service -> Mapper -> SQL

Finally, the number of people obtained through the count function of SQL is roughly as follows:

select count(employeeId) as A
from ... inner join ...
where deptId = #{deptId}
and jobType = #{jobType}
and ...

Here I have to gripe about the company's exotic combination of BeetlSQL + Oracle SQL. Its syntax differs in many ways from the MySQL I knew before; it can even add an IF function. It is quite peculiar and opened my eyes.

Then the logic can be slightly modified to get the detailed information of the employee:

select jobType,Name,employeeId,dept,position,sex
from ... inner join ...
where deptId = #{deptId}
and jobType = #{jobType}
and ...

No parameters need to change, which is very convenient. You can also infer that the required parameters are the department ID (deptId) and job type (jobType); pagination additionally requires the parameters page and limit.

You only need to create a new VO class to hold the returned jobType, Name, employeeId, dept, position, and sex fields. Skipping the Controller and Service interface code, look directly at the Service implementation class:

@Override
public PageResult<BaseVO> findEmployeeDetail(BaseDTO query) {
    // If deptId is null, set a default id from the login session
    if (StringUtils.isEmpty(query.getDeptId())) {
        query.setDeptId(getDefaultDeptId());
    }
    // Extract the pagination parameters
    Map<String, Object> pageParam = query.getPageParam();

    PageResult<BaseVO> baseVOs = mapper.findEmployeeDetail(query, pageParam);
    return baseVOs;
}

A null pointer is guarded against here, and then the paging logic is completed; all that remains is to ask the front end for the parameters passed from the previous page.

At this point, the things that need to be done at the back end are basically clear, and the overall requirements are already very clear.

Summarize

Finally, on the new page, the front end can receive the parameters pushed from the previous page (this.$route.params.deptId and so on), add a few paging parameters such as page: 1 and limit: 10, call the Controller method via the api method, and receive the returned data through .then(res => ...).

This requirement is a bit more difficult than the previous one, and more importantly, there are two ideas for solving project requirements:

  1. Solve requirements by thinking about problems
  2. learn by analogy

Connection Migration

Recently, I received my first requirement: migrate an already prepared page so that it can be accessed and displayed externally. What does that mean? This requirement is explained below.

In short, as a new developer, I have to admit I was confused when faced with a huge enterprise codebase and hundreds of classes across the back end and front end, not to mention having to face a requirement I had never touched before. Of course, the product manager understood that difficult requirements were unrealistic for me. The actual implementation of this requirement is not difficult, since many key parts of the code were already completed by others, but it was still a huge challenge for me.

Requirement Description

To put it simply, this requirement asks me to make a page that originally lived inside the management system accessible through an external link.

Many front-end pages and functions are displayed in a way similar to vue-element-admin. I need to take one completed Vue page, a BI report, and bypass the login system so it can be accessed directly through a link (login verification is not removed entirely; it just takes a different form). As for how it will be used after completion, I am not sure yet; there may be hyperlinks from other web pages pointing here.

Question

The requirements are like this, but there are still many incomprehensible problems:

  • Link? What kind of link to visit? What should it look like?
  • How should the front end be implemented?
  • What does the backend need to do?
  • What have other colleagues already done?

Ideas and Implementation

Front end

As a newcomer I need to keep learning, but of course I can't just study on my own.

By consulting senior colleagues, I learned that the connection-migration function is implemented in a single Vue page. It is also through this lesson that I learned the first idea for understanding a project and cracking a requirement:

Frontend Code Mapping Backend Code

Here is the rough front-end page structure, with specific information withheld for privacy reasons:

<template>
  <div>
    <component :is="XXcomponents[index]"></component>
  </div>
</template>

<script>
import ...

export default {
  data() {
    return {
      XXcomponents: [
        'a',
        'b',
        'c'...
      ],
      index: 0
    }
  },
  created() {
    window.myData = this
    this.routeData = this.$route.query
    this.method()
  },
  methods: {
    method() {
      this.$store.dispatch('/.../loginByUsername', this.routeData.data)
        .then(() => {
          let index = this.XXcomponents.findIndex(i => i == this.routeData.name)
          ...
        }).catch(() => {
          ...
        })
    }
  },
  components: {
    a,
    b,
    c...
  }
}
</script>
<style>...</style>

The template tag here presents the front-end content. You can see there is only one dynamic component, bound with :is. Since what we need to do is connection migration, there is reason to guess that this component decides which incoming page to show based on index, and then displays it on the front end.

So I looked at the data option and saw that XXcomponents corresponds to the components registered in the components option. These components are imported from other already-written Vue pages, which confirmed my guess.

Now, we already know how the front end implements the connection migration function through a Vue page:

Switch the Vue page to be displayed in the list by switching the index

So, what about the backend?

I see that this.$route.query is read in the Vue lifecycle hook created, which means this Vue page expects parameters when it is created.

The parameters are written directly in the URL. The result of this.$route.query is stored in this.routeData, and this.routeData.name and this.routeData.data appear in the following code, so the parameters in the incoming link should look something like this:

localhost:8888/index?data=123&name=123

As JSON, the query data is:

{ "data": "123", "name": "123" }
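Purely as an illustration of what this.$route.query hands us, the query-string-to-map step can be mimicked in plain Java. Vue Router does this parsing itself, QueryParse is a made-up name, and real URLs would also need percent-decoding:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryParse {

    // Split "data=123&name=123" into a key/value map, the way $route.query does
    static Map<String, String> parse(String query) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(parse("data=123&name=123")); // prints {data=123, name=123}
    }
}
```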

Back end

The matter of receiving parameters usually involves the backend, so what parameters does it receive?

I don't know yet, but there is the method named method, which internally uses an asynchronous dispatch and sends this.routeData.data, i.e. the value of the data attribute from the link, to an action called loginByUsername.

So the front end ends up calling a back-end method named loginByUsername, passing the data value in JSON format.

Open the back-end project in the IDE and press Ctrl + Shift + F to search for the matching loginByUsername method.

Soon I found a matching method: a @PostMapping method in a @Controller:

@PostMapping
public WebResult loginByUsername(@RequestBody UserNameDTO userNameDTO) {
    String userName = userNameDTO.getUserName();
    // get token
    String token = ...(userName);
    ......
    return WebResult.ok(token);
}

@Data
class UserNameDTO {

    public String userName;
}

This method receives a UserNameDTO entity via @RequestBody, obtains the username from it, encodes it with Base64 through a method encapsulated in the Service layer, and returns it to the front end as authentication information. (WebResult is an encapsulated class for customizing the mapping from return values to WSDL.)

So far, the function of the backend has also been confirmed:

The backend needs the userName passed in by the frontend; it then generates a token from that userName (used for authentication, page jumps, or connection migration) and returns it to the frontend.
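The Base64 step described above can be illustrated with the JDK's own encoder. This is a minimal sketch of the encoding idea only; the project's real token generation is encapsulated in its Service layer and is not shown in the post:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenSketch {

    // Encode a username the way the text describes: Base64 over the raw bytes
    static String toToken(String userName) {
        return Base64.getEncoder().encodeToString(userName.getBytes(StandardCharsets.UTF_8));
    }

    // The receiving side can decode the token back to the username
    static String fromToken(String token) {
        return new String(Base64.getDecoder().decode(token), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String token = toToken("James");
        System.out.println(token);            // prints SmFtZXM=
        System.out.println(fromToken(token)); // prints James
    }
}
```

Note that Base64 is an encoding, not encryption; it only obscures the value, which is presumably why the project wraps it in further token logic.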

Summarize

Finally, the name attribute of the link is used by the findIndex method to determine the index, and the matching Vue page is then displayed according to that index.

At this point, we can visit a link such as:

localhost:8888/index?data=James&name=study

It will log in as James and display the study.vue front-end page with the corresponding information.

In general, this requirement is not difficult. The core functions, such as dispatching the corresponding URL, matching the corresponding Vue page according to the index, and the back-end token generation, were already encapsulated. However, as my first requirement it was still hard for me, and more importantly, I learned:

  1. How to sort out ideas in enterprise projects - Frontend Code Mapping Backend Code
  2. Ctrl + Shift + F (global search in the IDE)